IdentityServer4 - Different behavior from docker-compose than manual run

I have three different projects in .NET Core 3.1:
STS
Admin UI
Admin API
I have a Dockerfile per project, and each builds its image correctly.
If I run my containers manually with these commands, everything is OK:
docker run -e "ASPNETCORE_ENVIRONMENT=Development" -e "ASPNETCORE_URLS=http://+:80" -e "DOTNET_USE_POLLING_FILE_WATCHER=1" -p 5001:80 stsidentity:latest
docker run -e "ASPNETCORE_ENVIRONMENT=Development" -e "ASPNETCORE_URLS=http://+:80" -e "DOTNET_USE_POLLING_FILE_WATCHER=1" -p 5003:80 adminapi:latest
docker run -e "ASPNETCORE_ENVIRONMENT=Development" -e "ASPNETCORE_URLS=http://+:80" -e "DOTNET_USE_POLLING_FILE_WATCHER=1" -p 9001:80 admin:latest
My next step was to create a docker-compose file to automate this, but I ran into some issues.
From Visual Studio, when I execute the docker-compose file below, the sts and api containers work fine.
When I try to access my Admin UI, I get the error "InvalidOperationException: IDX20803: Unable to obtain configuration from: 'http://localhost:5001/.well-known/openid-configuration'."
If I manually copy/paste that URL into my browser, I can access it normally.
I don't understand why the behavior differs between docker-compose and running the containers manually from the same images.
version: "3.4"

services:
  admin:
    image: ${DOCKER_REGISTRY-}admin:latest
    ports:
      - "9001:80"
    build:
      context: .
      dockerfile: src/IdentityServer/Admin/Dockerfile
    container_name: IS4-admin
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://+:80
      - DOTNET_USE_POLLING_FILE_WATCHER=1
    depends_on:
      - sts.identity
      - admin.api
  admin.api:
    image: ${DOCKER_REGISTRY-}admin-api:latest
    ports:
      - "5003:80"
    build:
      context: .
      dockerfile: src/IdentityServer/Admin.Api/Dockerfile
    container_name: IS4-admin-api
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://+:80
      - DOTNET_USE_POLLING_FILE_WATCHER=1
    depends_on:
      - sts.identity
  sts.identity:
    image: ${DOCKER_REGISTRY-}sts-identity:latest
    ports:
      - "5001:80"
    build:
      context: .
      dockerfile: src/IdentityServer/STS.Identity/Dockerfile
    container_name: IS4-sts-identity
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=http://+:80
      - DOTNET_USE_POLLING_FILE_WATCHER=1
It's probably something obvious but I don't see it.
-- Edit --
I read that this could be a problem related to localhost, so I tried to use a Traefik container as a reverse proxy, but in the end I have the same issue.
I'm starting to think that my problem is maybe not related to containers at all, but simply to running the app on something other than localhost under IIS Express.
From what I read about IdentityServer4, it could come from a certificate problem, but I put everything on HTTP and I don't understand why I would have a certificate issue over HTTP.
I managed to obtain a more detailed error message:
IOException: IDX20807: Unable to retrieve document from: 'http://login.traefik.me/.well-known/openid-configuration'. HttpResponseMessage: 'StatusCode: 404, ReasonPhrase: 'Not Found', Version: 1.1, Content: System.Net.Http.HttpConnectionResponseContent, Headers:
{
Date: Wed, 08 Jul 2020 13:21:07 GMT
X-XSS-Protection: 1; mode=block
X-Content-Type-Options: nosniff
X-Frame-Options: SameOrigin
Referrer-Policy: no-referrer
Content-Security-Policy: script-src 'self' 'unsafe-inline' 'unsafe-eval';style-src 'self' 'unsafe-inline' https://fonts.googleapis.com/ https://fonts.gstatic.com/;font-src 'self' https://fonts.googleapis.com/ https://fonts.gstatic.com/
Content-Length: 0
}', HttpResponseMessage.Content: ''.
Microsoft.IdentityModel.Protocols.HttpDocumentRetriever.GetDocumentAsync(string address, CancellationToken cancel)
InvalidOperationException: IDX20803: Unable to obtain configuration from: 'http://login.traefik.me/.well-known/openid-configuration'.
Microsoft.IdentityModel.Protocols.ConfigurationManager<T>.GetConfigurationAsync(CancellationToken cancel)
Thanks in advance,
Have a nice day

This is exactly the same error as the one I was getting with my Docker Compose solution.
Here is a simplified overview of the solution:
Api
Admin
IdentityServer4
Traefik
I configured Traefik to forward requests on 'api.localhost' to the Api container, requests on 'admin.localhost' to the Admin container, and requests on 'identity.localhost' to the IdentityServer4 container.
Like you, requests to 'api.localhost', 'admin.localhost' and 'identity.localhost' all work fine in a browser.
The problem appears when I try to log in from the Api or the Admin containers, because inter-container communication is not handled by Traefik. Furthermore, a DNS name such as '*.localhost' is resolved inside the container itself, since it is a loopback name, and is therefore not resolved the way you expect.
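You can see this from inside a container (a quick sketch; the service and host names follow this example and should be adjusted to yours):
# Resolve the IdentityServer4 domain from inside the Admin container:
docker-compose exec admin getent hosts identity.localhost
# Depending on the container's resolver this returns a loopback address
# (127.0.0.1) or nothing at all; either way the request never reaches Traefik.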
My solution to make it work as expected is the following:
STEP 1: Create a new network in your docker-compose file. That network lets you define your own subnet, and therefore assign a static IP to your Traefik container.
networks:
  traefik:
    ipam:
      driver: default
      config:
        - subnet: 172.30.0.0/16 # replace with your desired subnet
STEP 2: Configure the Traefik container's static IP address:
networks:
  default: # required to talk to the other containers
  traefik:
    ipv4_address: 172.30.1.1 # can be any address within the subnet defined at step 1
STEP 3: Add the newly created network to all the containers that rely on IdentityServer4, and do not forget to add the 'default' network too. In addition, we will create extra hosts corresponding to the DNS name of our IdentityServer4 container, as configured in Traefik. Those additional hosts will point to the static IP address of the Traefik container.
admin:
  image: ...
  environment: ...
  labels: ...
  networks:
    traefik:
    default:
  extra_hosts:
    - "identity.localhost:172.30.1.1" # replace with the Traefik static address configured at step 2
At this point, if the Admin container tries to resolve the 'identity.localhost' DNS, it will be forwarded to Traefik, which in turn will redirect to the IdentityServer4 container: yaaai!
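To sanity-check the fix, resolve the name from inside the container again (same assumed names as above); getent reads /etc/hosts first, so the extra_hosts entry should win:
docker-compose exec admin getent hosts identity.localhost
# expected: 172.30.1.1    identity.localhost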
If you test it out, however, you will still run into a couple of issues, all related to HTTPS (which I strongly recommend you enable and configure).
Those issues are:
The SSL certificates you have configured in Traefik might not be trusted by your containers. If so, make sure to add the root CA (in the case of self-signed certificates) to the container's '/etc/ssl/certs' directory.
To do so, you have to mount a volume, as in the following example:
admin:
  image: ...
  environment: ...
  labels: ...
  networks:
    traefik:
    default:
  extra_hosts:
    - "identity.localhost:172.30.1.1"
  volumes:
    - "./localPathToMyCACertificateFiles/:/etc/ssl/certs/"
When the Admin or Api containers try to retrieve the well-known configuration from the IdentityServer, they will probably complain that the configuration endpoints are in http instead of https. To fix this, you will need to configure forwarded headers for the IdentityServer4 container and all containers relying on it. Check this out if you don't know how to do so. An important point is to make sure that 'RequireHeaderSymmetry' is set to false.
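As a minimal sketch of that last point, assuming the services are ASP.NET Core 3.x (where the built-in ASPNETCORE_FORWARDEDHEADERS_ENABLED switch enables the forwarded-headers middleware with X-Forwarded-For/X-Forwarded-Proto defaults), you can turn it on straight from the compose file; anything finer, such as RequireHeaderSymmetry, has to be configured in code via ForwardedHeadersOptions:
sts.identity:
  environment:
    # Assumption: ASP.NET Core 3.x honors this switch; with it, the app
    # respects X-Forwarded-Proto and IdentityServer emits https endpoints.
    - ASPNETCORE_FORWARDEDHEADERS_ENABLED=true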
This should do the trick.

Related

Using custom local domain with Docker

I am running Docker using Docker Desktop on Windows.
I would like to set up a simple server.
I run it using:
$ docker run -di -p 1234:80 yahya/example-server
This works as expected and runs fine on localhost:1234.
However, I want to give it its own local domain name (e.g. api.example.test), which should only be accessible locally.
Normally for a VM setup I would edit the Windows hosts file, get the IP address of the VM (let's say it's 192.168.90.90) and add something like the following:
192.168.90.90 api.example.test
How would I do something similar in Docker?
I know you can specify an IP address for port forwarding, but if I enter any local IP I get the following error:
$ docker run -di -p 192.168.90.90:1234:80 yahya/example-server
docker: Error response from daemon: Ports are not available: exposing port TCP 192.168.90.90:80 -> 0.0.0.0:0: listen tcp 192.168.90.90:80: can't bind on the specified endpoint.
However, it does work for 10.0.0.7 for some reason (I found this IP automatically added in the hosts file after installing Docker Desktop).
$ docker run -di -p 10.0.0.7:1234:80 yahya/example-server
This essentially solves the issue, but it would become an issue again if I have more than one project.
Is there a way I can use another local IP address (preferably without an nginx proxy)?
I think there is no simple way to do this without some kind of reverse-proxy.
In my dev environment I use Traefik and dnscrypt-proxy to achieve automatic *.test domain names for multiple projects at the same time.
First, start the Traefik proxy on ports 80 and 443; example docker-compose.yml:
---
networks:
  traefik:
    name: traefik

services:
  traefik:
    image: traefik:2.8.3
    container_name: traefik
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik
    ports:
      - 80:80
      - 443:443
    environment:
      TRAEFIK_API: 'true'
      TRAEFIK_ENTRYPOINTS_http: 'true'
      TRAEFIK_ENTRYPOINTS_http_ADDRESS: :80
      TRAEFIK_ENTRYPOINTS_https: 'true'
      TRAEFIK_ENTRYPOINTS_https_ADDRESS: :443
      TRAEFIK_ENTRYPOINTS_https_HTTP_TLS: 'true'
      TRAEFIK_GLOBAL_CHECKNEWVERSION: 'false'
      TRAEFIK_GLOBAL_SENDANONYMOUSUSAGE: 'false'
      TRAEFIK_PROVIDERS_DOCKER: 'true'
      TRAEFIK_PROVIDERS_DOCKER_EXPOSEDBYDEFAULT: 'false'
Then attach your service to the traefik network, and set labels for routing (see Traefik & Docker). Example docker-compose.yml:
---
networks:
  traefik:
    external: true

services:
  example:
    image: yahya/example-server
    restart: always
    labels:
      traefik.enable: true
      traefik.docker.network: traefik
      traefik.http.routers.example.rule: Host(`example.test`)
      traefik.http.services.example.loadbalancer.server.port: 80
    networks:
      - traefik
Finally, add to hosts:
127.0.0.1 example.test
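With that hosts entry in place, the request should now go through Traefik (assuming the example service above is up):
curl http://example.test
# Traefik matches Host(`example.test`) and proxies to the example container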
Instead of manually adding all future domains to hosts, you can set up a local DNS resolver. I prefer to use the cloaking feature of dnscrypt-proxy for this.
You can install it using the Installation instructions, then uncomment the following line in dnscrypt-proxy.toml:
cloaking_rules = 'cloaking-rules.txt'
and add to cloaking-rules.txt:
*.test 127.0.0.1
Finally, set up your network connection to use 127.0.0.1 as its DNS resolver.
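To check that the resolver answers as expected (a sketch, assuming dnscrypt-proxy is listening on 127.0.0.1:53):
nslookup anything.test 127.0.0.1
# should answer 127.0.0.1 for any *.test name, per the cloaking rule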

Traefik with Docker-Compose not working as expected

I am fairly new to using Traefik, so I might be totally missing something simple, but I have the following docker-compose.yaml:
version: '3.8'

services:
  reverse-proxy:
    container_name: reverse_proxy
    restart: unless-stopped
    image: traefik:v2.0
    command:
      - --entrypoints.web.address=:80
      - --entrypoints.web-secure.address=:443
      - --api.insecure=true
      - --providers.file.directory=/conf/
      - --providers.file.watch=true
      - --providers.docker=true
    ports:
      - "80:80"
      - "8080:8080"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./scripts/certificates/conf/:/conf/
      - ./scripts/certificates/ssl/:/certs/
    networks:
      - bnkrl.io
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.api.rule=Host(`traefik.bnkrl.io`)"
      - "traefik.docker.network=bnkrl.io"
  bankroll:
    container_name: bankroll
    build:
      context: .
    ports:
      - "3000"
    volumes:
      - .:/usr/src/app
    command: yarn start
    networks:
      - bnkrl.io
    labels:
      - "traefik.http.routers.bankroll.rule=Host(`bankroll.bnkrl.io`)"
      - "traefik.docker.network=bnkrl.io"
      - "traefik.http.services.bankroll.loadbalancer.server.port=3000"
      - "traefik.http.routers.bankroll-https.rule=Host(`bankroll.bnkrl.io`)"
      - "traefik.http.routers.bankroll-https.tls=true"

networks:
  bnkrl.io:
    external: true
But for some reason the following is happening:
Running curl when ssh'd into my bankroll container gives the following:
/usr/src/app# curl bankroll.bnkrl.io
curl: (7) Failed to connect to bankroll.bnkrl.io port 80: Connection refused
Despite having the "traefik.http.services.bankroll.loadbalancer.server.port=3000" label set up.
I am also unable to hit traefik from my application container:
curl traefik.bnkrl.io
curl: (6) Could not resolve host: traefik.bnkrl.io
Despite my expectation to be able to do so since they are both on the same network.
Any help with understanding what I might be doing wrong would be greatly appreciated! My application (bankroll) is a very basic hello-world react app, but I don't think any of the details around that are relevant to the issue I'm facing.
EDIT: I am also not seeing any error logs on traefik side of things.
You are using host names that are not declared and therefore are unreachable.
To reach a container from another container, you need to use the service name: for example, if you connect to bankroll from the reverse-proxy service, it will hit the other service.
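For example, from inside the bankroll container the proxy is reachable by its compose service name (a sketch; names are taken from the compose file above):
# resolves over the shared bnkrl.io network and routes back through Traefik
curl -H "Host: bankroll.bnkrl.io" http://reverse-proxy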
If you want to access them from the host machine, you have to publish the ports (which you did; that's the ports section in your docker-compose file) and access them via localhost or your machine's local IP address instead of traefik.bnkrl.io.
If you want to access them via traefik.bnkrl.io, you will have to declare this host name and point it to the place where the Docker containers are running:
either a DNS record in the bnkrl.io domain pointing to your local machine, or a HOSTS file entry on your computer pointing to 127.0.0.1.
Another note: for SSL you are going to need a valid certificate for the host name. In local development you can use the self-signed certificate provided by Traefik, but you may have to install it on the computer connecting to the service, or allow untrusted certificates in your browser or wherever you are making the requests from (some browsers no longer support self-signed certificates). For SSL on the Internet you will need to look at things like Let's Encrypt.

Minio / Keycloak integration: connection refused

I am trying to connect MinIO with KeyCloak, following the instructions provided in this documentation:
https://github.com/minio/minio/blob/master/docs/sts/keycloak.md
What I have done so far is deploy a Docker container for the MinIO server, another one for the MinIO client, and a third one for the KeyCloak server.
As you can see in the following snippet, the configuration of the MinIO client container is done correctly, since I can list the buckets available on the MinIO server:
mc ls myminio
[2020-05-14 11:54:59 UTC] 0B bucket1/
[2020-05-06 12:23:01 UTC] 0B bucket2/
The issue arises when I try to configure MinIO as described in step 3 (Configure MinIO) of the documentation. In more detail, the command that I run is this one:
mc admin config set myminio identity_openid config_url="http://localhost:8080/auth/realms/demo/.well-known/openid-configuration" client_id="account"
And the error I get is this one:
mc: <ERROR> Cannot set 'identity_openid config_url=http://localhost:8080/auth/realms/demo/.well-known/openid-configuration client_id=account' to server. Get http://localhost:8080/auth/realms/demo/.well-known/openid-configuration: dial tcp 127.0.0.1:8080: connect: connection refused.
When I curl this address http://localhost:8080/auth/realms/demo/.well-known/openid-configuration from the MinIO Client container though, I retrieve the JSON file.
It turns out that all I had to do was change the localhost in the config_url to the IP of the KeyCloak container (172.17.0.3).
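For reference, that IP can be looked up with docker inspect; <keycloak-container> below is a placeholder for whatever your KeyCloak container is named:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <keycloak-container>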
This is just a temporary solution that works for now, but I will continue searching for something more concrete than just hardcoding the IP.
When I figure out the solution, this answer will be updated.
Update
I had to create a docker-compose.yml file like the one below in order to overcome the issue without having to manually place the IP of the KeyCloak container.
version: '2'

services:
  miniod:
    image: minio/minio
    restart: always
    container_name: miniod
    ports:
      - 9000:9000
    volumes:
      - "C:/data:/data"
    environment:
      - "MINIO_ACCESS_KEY=access_key"
      - "MINIO_SECRET_KEY=secret_key"
    command: ["server", "/data"]
    networks:
      - minionw
  mcd:
    image: minio/mc
    container_name: mcd
    networks:
      - minionw
  kcd:
    image: quay.io/keycloak/keycloak:10.0.1
    container_name: kcd
    restart: always
    ports:
      - 8080:8080
    environment:
      - "KEYCLOAK_USER=admin"
      - "KEYCLOAK_PASSWORD=pass"
    networks:
      - minionw

networks:
  minionw:
    driver: "bridge"
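With all three containers on the minionw network, the KeyCloak container is reachable by its name, so the step-3 command can drop the hardcoded IP (a sketch adapted from the original command above):
mc admin config set myminio identity_openid \
  config_url="http://kcd:8080/auth/realms/demo/.well-known/openid-configuration" \
  client_id="account"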
Connection refused occurs when the port is not accessible on the hostname or IP we specified.
Try exposing the port using the --expose flag with the port number you wish to expose when using the Docker CLI. Once it is exposed, you can access it on localhost.

HTTPS-Redirect with Traefik behind Aws Loadbalancer

I'm trying to redirect all incoming traffic from HTTP to HTTPS for a web application that is served out of a Docker container on a custom port.
If I build this Docker Compose file and scale the application, everything works as expected: I'm able to request both the HTTP and HTTPS versions of the application. What I'm trying to accomplish is that only HTTPS gets served and HTTP gets redirected to HTTPS.
Since I use a Docker Compose file, I don't have a traefik.toml, and I'm trying to accomplish this without one.
Docker Compose:
traefik:
  image: traefik:latest
  command:
    - "--api"
    - "--docker"
    - "--docker.domain=example.com"
    - "--logLevel=DEBUG"
    - "--docker.watch"
  labels:
    - "traefik.enable=true"
  ports:
    - "80:80"
    - "8080:8080"
    - "443:443"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /dev/null:/traefik.toml
application:
  image: application
  command: web
  tty: false
  stdin_open: true
  restart: always
  expose:
    - "8081"
  labels:
    - "traefik.backend=application"
    - "traefik.frontend.rule=HostRegexp:{subdomain:[a-z]+}.example.com"
    - "traefik.frontend.priority=1"
    - "traefik.enable=true"
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
I tried different variations on the application container, such as:
- "traefik.frontend.entryPoints=http,https"
- "traefik.frontend.redirect.entryPoint=https"
- "traefik.frontend.headers.SSLRedirect=true"
But the most I could accomplish was a "too many redirects" response with the SSLRedirect label; without it I get the following from Traefik, and neither HTTP nor HTTPS requests get forwarded correctly.
level=error msg="Recovered from panic in http handler: runtime error: invalid memory address or nil pointer dereference"
Can anyone push me in the right direction?
Thanks in advance ;)
I run with the following setup:
user:~$ docker --version
Docker version 1.13.1, build 092cba3
user:~$ docker-compose --version
docker-compose version 1.8.0
docker ps output:
IMAGE COMMAND ... PORTS NAMES
application "dotnet Web..." ... 8081/tcp components_application_1
traefik:latest "/traefik --api --..." ... 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:8080->8080/tcp components_traefik_1
Infrastructure Setup
aws-elb => vpc => ec2...ecn
traefik per instance,
n applications per instance
This only works up to Traefik v1.7; for v2.* you need a different config setup, which I haven't figured out yet.
After deeper research, I found the solution myself.
The problem was a missing label on the application container.
After I added
- "traefik.frontend.headers.SSLProxyHeaders=X-Forwarded-Proto: https"
- "traefik.frontend.headers.SSLRedirect=true"
on my application containers, it worked like a charm with a clean 301 redirect.
Why the need for the header? By default the AWS ELB accepts an HTTPS request and forwards it over HTTP (port 80) to the connected instance; during this process the ELB adds the X-Forwarded-Proto: https header to the request.
Since Traefik doesn't know that it is running behind an ELB, it would otherwise redirect over and over again. The header stops this behavior.

Local hostnames for Docker containers

Beginner Docker question here.
So I have a development environment in which I'm running a modular app; it works using Docker Compose to run 3 containers: server, client, database.
The docker-compose.yml looks like this:
#############################
# Server
#############################
server:
  container_name: server
  domainname: server.dev
  hostname: server
  build: ./server
  working_dir: /app
  ports:
    - "3000:3000"
  volumes:
    - ./server:/app
  links:
    - database

#############################
# Client
#############################
client:
  container_name: client
  domainname: client.dev
  hostname: client
  image: php:5.6-apache
  ports:
    - "80:80"
  volumes:
    - ./client:/var/www/html

#############################
# Database
#############################
database:
  container_name: database
  domainname: database.dev
  hostname: database
  image: postgres:9.4
  restart: always
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=root
    - POSTGRES_DB=dbdev
    - PG_TRUST_LOCALNET=true
  ports:
    - "5432:5432"
  volumes:
    - ./database/scripts:/docker-entrypoint-initdb.d # init scripts
You can see I'm assigning a .dev domainname to each one; this works fine for reaching one machine from another over the Docker internal network. For example, here I'm pinging server.dev from client.dev's CLI:
root#client:/var/www/html# ping server.dev
PING server.dev (127.0.53.53): 56 data bytes
64 bytes from 127.0.53.53: icmp_seq=0 ttl=64 time=0.036 ms
This works great internally, but not from my host OS network.
For convenience, I would like to assign domains on MY local network, not the Docker containers' network, so that I can for example type client.dev in my browser's URL bar and load the Docker container.
Right now, I can only access them if I use the Docker IP, which is dynamic:
client: 192.168.99.100:80
server: 192.168.99.100:3000
database: 192.168.99.100:5432
Is there an automated/convenient way to do this that doesn't involve me manually adding the IP to my /etc/hosts file ?
BTW I'm on OSX if that has any relevance.
Thanks!
Edit: I found this Github issue which seems to be related: https://github.com/docker/docker/issues/2335
As far as I understood, they seem to say that it is something that is not available out of the box, and they suggest external tools like:
https://github.com/jpetazzo/pipework
https://github.com/bnfinet/docker-dns
https://github.com/gliderlabs/resolvable
Is that correct? And if so, which one should I go for in my particular scenario?
OK, so since it seems there is no native way to do this with Docker, I finally opted for this alternate solution from Ryan Armstrong, which consists of dynamically updating the /etc/hosts file.
I chose this since it was convenient for me: it works as a script, and I already had a startup script, so I could just append this function to it.
The following example creates a hosts entry named docker.local which
will resolve to your docker-machine IP:
update-docker-host(){
  # clear existing docker.local entry from /etc/hosts
  sudo sed -i '' '/[[:space:]]docker\.local$/d' /etc/hosts
  # get ip of running machine
  export DOCKER_IP="$(echo ${DOCKER_HOST} | grep -oE '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"
  # update /etc/hosts with docker machine ip
  [[ -n $DOCKER_IP ]] && sudo /bin/bash -c "echo \"${DOCKER_IP} docker.local\" >> /etc/hosts"
}
update-docker-host
This will automatically add or update the /etc/hosts line on my host OS when I start the Docker machine through my startup script.
Anyway, as I found out during my research, apart from editing the hosts file you could also solve this problem by setting up a custom DNS server.
I also found several projects on GitHub which apparently aim to solve this problem, although I didn't try them:
https://github.com/jpetazzo/pipework
https://github.com/bnfinet/docker-dns
https://github.com/gliderlabs/resolvable
Extending on @eduwass's own answer, here's what I did manually (without a script).
As mentioned in the question, define domainname: myapp.dev and hostname: www in the docker-compose.yml file
Bring up your Docker containers as normal
Run docker-compose exec client cat /etc/hosts to get the container's hosts file (where client is your service name)
(Output example: 172.18.0.6 www.myapp.dev)
Open your local (host machine) /etc/hosts file and add that line: 172.18.0.6 www.myapp.dev
If your Docker service container changes IPs or does anything fancy you will want a more complex solution, but this is working for my simple needs at the moment.
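Alternatively, you can query the container IP directly instead of reading its hosts file (a one-liner using the client service name from above):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' client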
Another solution would be to use a browser with a proxy extension that sends requests through a proxy container that knows how to resolve the domains. If you are considering jwilder/nginx-proxy for production mode, then your issue can be easily solved with mitm-nginx-proxy-companion.
Here is an example based on your original stack:
version: '3.3'

services:
  server:
    build: ./server
    working_dir: /app
    volumes:
      - ./server:/app
  client:
    environment:
      - VIRTUAL_HOST=client.dev
    image: php:5.6-apache
    volumes:
      - ./client:/var/www/html
  database:
    image: postgres:9.4
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=dbdev
      - PG_TRUST_LOCALNET=true
    volumes:
      - ./database/scripts:/docker-entrypoint-initdb.d # init scripts
  nginx-proxy:
    image: jwilder/nginx-proxy
    labels:
      - "mitmproxy.proxyVirtualHosts=true"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  nginx-proxy-mitm:
    dns:
      - 127.0.0.1
    image: artemkloko/mitm-nginx-proxy-companion
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
Run docker-compose up
Add a proxy extension to your browser, with proxy address being 127.0.0.1:8080
Access http://client.dev
The request will follow the route:
Access a local development domain in a browser
The proxy extension forwards that request to mitm-nginx-proxy-companion instead of the “real” internet
mitm-nginx-proxy-companion tries to resolve the domain name through the dns server in the same container
If the domain is not a “local” one, it will forward the request to the “real” internet
But if the domain is a “local” one, it will forward the request to the nginx-proxy
The nginx-proxy in its turn forwards the request to the appropriate container that includes the service we want to access
Side notes:
links was removed, as it is outdated and replaced by Docker networks
you don't need to add domain names to the server and database containers; client will be able to reach them at the server and database host names because they are all on the same network (similar to what links did previously)
you don't need ports on the server and database containers, because ports only forwards them for access through 127.0.0.1. PHP in the client container makes only "back-end" requests to the other containers, and because those containers are on the same network you can already reach them at database:5432 and server:3000. The same goes for server <-> database connections.
I am the author of mitm-nginx-proxy-companion
To make a whole domain point to localhost you can use dnsmasq. In this case, if you choose the domain .dev, any subdomain will point to your container. But you have to know about the problems with the .dev zone.
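A minimal dnsmasq rule for this (a sketch, assuming a stock dnsmasq.conf; mind the .dev caveat above):
# dnsmasq.conf: answer 127.0.0.1 for .dev and every subdomain of it
address=/dev/127.0.0.1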
Or you can use a bash script to launch your docker-compose: on start it adds a line to /etc/hosts, and when you kill the process the line is removed.
#!/usr/bin/env bash
# add the hosts entry, and remove it again when this script is interrupted (SIGINT)
sudo sed -i '1s;^;127.0.0.1 example.dev\n;' /etc/hosts
trap 'sudo sed -i "/example.dev/d" /etc/hosts' 2
docker-compose up
My Bash script WITH ALIAS without docker-machine
Based on http://cavaliercoder.com/blog/update-etc-hosts-for-docker-machine.html
#!/bin/bash

# alias
declare -A aliasArr
aliasArr[docker_name]="alias1,alias2"

# clear existing *.docker.local entries from /etc/hosts
sudo sed -i '/\.docker\.local$/d' /etc/hosts

# iterate over each machine
docker ps -a --format "{{.Names}}" \
| while read -r MACHINE; do
    MACHINE_IP="$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ${MACHINE} 2>/dev/null)"
    if [[ ${aliasArr[$MACHINE]} ]]
    then
      DOMAIN_NAME=$(echo ${aliasArr[$MACHINE]} | tr "," "\n")
    else
      DOMAIN_NAME=( ${MACHINE} )
    fi
    for addr in $DOMAIN_NAME
    do
      echo "add ${MACHINE_IP} ${addr}.docker.local"
      [[ -n $MACHINE_IP ]] && sudo /bin/bash -c "echo \"${MACHINE_IP} ${addr}.docker.local\" >> /etc/hosts"
      export no_proxy=$no_proxy,$MACHINE_IP
    done
  done
