Docker 18.06.1-ce, Traefik 1.7.3, dnsmasq, macOS 10.14
I have a docker-compose setup with Traefik and need to access several services both from inside the docker network/containers and externally.
On a Linux box (with Let's Encrypt and HTTP redirected to HTTPS), everything works using the same FQDN for both container-internal and external access, and I don't have to use the service names.
When I run curl http://belapi.dev.biodati.test from inside the pipeline container using docker-compose exec belapi /bin/bash, I get the following error (and I don't see it showing up in the Traefik access logs):
api@407cf7105aee:/app$ curl http://belapi.dev.biodati.test/status
curl: (7) Failed to connect to belapi.dev.biodati.test port 80: Connection refused
This works fine (using the service name):
curl http://belapi:8000/status
I can also run the following fine from a bash shell on my Mac outside the docker containers (and I see it hitting the Traefik access logs):
curl http://belapi.dev.biodati.test/status
I have dnsmasq set up to forward all *.test domains to 127.0.0.1.
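For reference, a minimal version of that dnsmasq setup looks roughly like this (the Homebrew config path is an assumption; yours may differ):

# /usr/local/etc/dnsmasq.conf
address=/test/127.0.0.1

# /etc/resolver/test
nameserver 127.0.0.1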
From inside the pipeline container:
dig belapi.dev.biodati.test
;; QUESTION SECTION:
;belapi.dev.biodati.test. IN A
;; ANSWER SECTION:
belapi.dev.biodati.test. 7 IN A 127.0.0.1
My docker-compose file:
networks:
  biodati:
    external: true

services:
  pipeline:
    container_name: pipeline
    image: biodati/bel_pipeline:dev
    networks:
      biodati:

  traefik:
    image: traefik:1.7
    container_name: traefik
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./conf/traefik.toml:/traefik.toml
      - ./logs:/logs
    networks:
      biodati:
    labels:
      - traefik.enable=true
      - traefik.backend=traefik
      - traefik.frontend.rule=Host:traefik.${BS_HOST_NAME:?err}
      - traefik.port=8080
      - traefik.docker.network=biodati

  # BEL API - core requirement
  belapi:
    container_name: belapi
    image: belbio/bel_api:localdev
    networks:
      biodati:
    labels:
      - traefik.enable=true
      - traefik.backend=belapi
      - traefik.frontend.rule=Host:belapi.${BS_HOST_NAME:?err}
      - traefik.port=8000
      - traefik.docker.network=biodati
For full details on how to solve this: https://medium.com/@williamhayes/local-dev-on-docker-fun-with-dns-85ca7d701f0a
Basically - DNSMasq was working great, and Mac Docker Desktop DNS mapping was working great. I could query for my service domain name (e.g. service1.test) with dig service1.test and get back 127.0.0.1, which is exactly what I set up in DNSMasq. So my domain name was returning the correct IP address for my host. Except - I was getting this inside my container, so 127.0.0.1 was referring to my container environment.
Running the following command on the Mac host level in a terminal:
sudo ifconfig lo0 alias 10.254.254.254
added an alias for the loopback interface that I could use in DNSMasq instead of 127.0.0.1; it still maps to my localhost, but it also works for routing from my docker containers.
Now I can use local domains on my Mac for local development in Docker and get to my containers from my host AND via inter-container requests.
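Put together, the fix looks roughly like this (10.254.254.254 is an arbitrary otherwise-unused address; note the alias does not survive a reboot unless you also add a launchd job for it):

sudo ifconfig lo0 alias 10.254.254.254

# dnsmasq.conf: point *.test at the alias instead of 127.0.0.1
address=/test/10.254.254.254

After restarting dnsmasq, *.test resolves to 10.254.254.254, which reaches the host both from the Mac itself and from inside the containers.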
Related
I am running Docker using Docker Desktop on Windows.
I would like to set up a simple server.
I run it using:
$ docker run -di -p 1234:80 yahya/example-server
This works as expected and runs fine on localhost:1234.
However, I want to give it its own local domain name (e.g. api.example.test), which should only be accessible locally.
Normally for a VM setup I would edit the Windows hosts file, get the IP address of the VM (let's say it's 192.168.90.90) and add something like the following:
192.168.90.90 api.example.test
How would I do something similar in Docker?
I know you can enter an IP address for port forwarding, but if I enter any local IP I get the following error:
$ docker run -di -p 192.168.90.90:1234:80 yahya/example-server
docker: Error response from daemon: Ports are not available: exposing port TCP 192.168.90.90:80 -> 0.0.0.0:0: listen tcp 192.168.90.90:80: can't bind on the specified endpoint.
However, it does work for 10.0.0.7 for some reason (I found this IP automatically added in the hosts file after installing Docker Desktop).
$ docker run -di -p 10.0.0.7:1234:80 yahya/example-server
This essentially solves the issue, but it would become an issue again if I have more than one project.
Is there a way I can use another local IP address (preferably without an nginx proxy)?
I think there is no simple way to do this without some kind of reverse-proxy.
In my dev environment I use Traefik and dnscrypt-proxy to get automatic *.test domain names for multiple projects at the same time.
First, start the Traefik proxy on ports 80 and 443; example docker-compose.yml:
---
networks:
  traefik:
    name: traefik

services:
  traefik:
    image: traefik:2.8.3
    container_name: traefik
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik
    ports:
      - 80:80
      - 443:443
    environment:
      TRAEFIK_API: 'true'
      TRAEFIK_ENTRYPOINTS_http: 'true'
      TRAEFIK_ENTRYPOINTS_http_ADDRESS: :80
      TRAEFIK_ENTRYPOINTS_https: 'true'
      TRAEFIK_ENTRYPOINTS_https_ADDRESS: :443
      TRAEFIK_ENTRYPOINTS_https_HTTP_TLS: 'true'
      TRAEFIK_GLOBAL_CHECKNEWVERSION: 'false'
      TRAEFIK_GLOBAL_SENDANONYMOUSUSAGE: 'false'
      TRAEFIK_PROVIDERS_DOCKER: 'true'
      TRAEFIK_PROVIDERS_DOCKER_EXPOSEDBYDEFAULT: 'false'
Then attach your service to the traefik network and set labels for routing (see Traefik & Docker). Example docker-compose.yml:
---
networks:
  traefik:
    external: true

services:
  example:
    image: yahya/example-server
    restart: always
    labels:
      traefik.enable: true
      traefik.docker.network: traefik
      traefik.http.routers.example.rule: Host(`example.test`)
      traefik.http.services.example.loadbalancer.server.port: 80
    networks:
      - traefik
Finally, add to hosts:
127.0.0.1 example.test
Instead of manually adding all future domains to hosts, you can set up a local DNS resolver. I prefer to use the cloaking feature of dnscrypt-proxy for this.
You can install it using the installation instructions, then uncomment the following line in dnscrypt-proxy.toml:
cloaking_rules = 'cloaking-rules.txt'
and add to cloaking-rules.txt:
*.test 127.0.0.1
Finally, set up your network connection to use 127.0.0.1 as its DNS resolver.
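Since the question is about Windows, that last step can be done from an elevated prompt, roughly like this (the interface name "Ethernet" is an assumption; list yours with netsh interface show interface):

netsh interface ip set dns name="Ethernet" static 127.0.0.1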
I have a docker-compose setup that gathers three images (mariadb, tomcat, and a backup service).
In the end, this exposes port 8080, to which any user can connect with a browser.
This docker-compose setup seems to work nicely, as I can open a browser (from the host) and browse http://localhost:8080/my service path.
I have not yet tried from a different machine (I don't have another one where I am currently), but since the default network type is bridge it should also work.
My docker-compose.yml looks like this:
version: "3.0"
networks:
my-network:
services:
mariadb-service:
image: *****
ports:
- "3306:3306"
networks:
- my-network
tomcat-service:
image: *****
ports:
- "8080:8080"
networks:
- my-network
depends_on:
- mariadb-service
backup-service:
image: *****
depends_on:
- mariadb-service
networks:
- my-network
(I removed all the irrelevant parts.)
Now I also have a 'client' docker image that allows connecting to such a server (very similarly to the user with a browser). I'm running this docker image this way:
docker run --name xxx -it -e SERVER_NAME=<ip address of the server> <image name/tag> bash
The strange thing is that this client container can connect to an external server (running on a production machine) but cannot connect to the server containers running locally on the same host.
My understanding is that with the default network type (bridge), all docker containers can communicate with each other on the docker host and can also be accessed from outside.
What am I missing?
Thanks,
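One thing worth checking, sketched below: docker-compose attaches services to a project-scoped network (here my-network, which docker names <project>_my-network), not to the default bridge, so a container started separately with docker run lands on a different network and cannot reach the compose services by name. Assuming the compose project directory is named myproject, joining the client to that network would look like:

docker network ls   # look for myproject_my-network
docker run --name xxx -it \
  --network myproject_my-network \
  -e SERVER_NAME=tomcat-service \
  <image name/tag> bash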
Context
I was planning on simplifying the development setup of multiple docker-compose.yml files by introducing virtual hosts locally. I looked around and decided to use nginx-proxy for the reverse proxy (for the ability to set VIRTUAL_HOST for each service).
Setup
To expose these on the host machine I went the route of dnsmasq and added an /etc/resolver/test file containing nameserver 127.0.0.1.
I went and put the above into action using a dev/docker-compose.yml file:
version: '3.5'

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: 'always'
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/var/run/docker.sock:/tmp/docker.sock:ro"

  dnsmasq:
    image: andyshinn/dnsmasq
    restart: 'always'
    ports:
      - "53:53/tcp"
      - "53:53/udp"
    cap_add:
      - NET_ADMIN
    command: --log-facility=-
    volumes:
      - ./data/dnsmasq.conf:/etc/dnsmasq.conf
      - ./data/dnsmasq.d:/etc/dnsmasq.d

networks:
  default:
    external:
      name: proxynet
The data/dnsmasq.conf file only contains address=/test/127.0.0.1.
I've also created an external network proxynet and use that as the default network for the docker-compose file(s) (docker network create proxynet). This then allows other docker-compose files and services to be linked to the proxy.
I have the following proj1/docker-compose.yml:
version: "3.5"
services:
proj1-web:
image: jwilder/whoami
environment:
- VIRTUAL_HOST=proj1-web.test
networks:
default:
external:
name: proxynet
With both of these docker-compose files running (i.e., docker-compose up) I am able to access proj1-web.test from my local machine. Everything works as expected.
Now I want to be able to reference proj1-web.test in another container and have it resolve to the running container.
I'll create proj2/docker-compose.yml (similar to the previous one, just a different name):
version: "3.5"
services:
proj2-web:
image: jwilder/whoami
environment:
- VIRTUAL_HOST=proj2-web.test
networks:
default:
external:
name: proxynet
With everything running I can access both proj1-web.test and proj2-web.test from my local machine. I can also successfully curl between proj1 and proj2 using service names: docker-compose run proj1-web sh -c "apk update -qq; apk add curl -qq; curl -v proj2-web:8000".
Problem
The problem is that I cannot curl the virtual host's name proj2-web.test from proj1: docker-compose run proj1-web sh -c "apk update -qq; apk add curl -qq; curl -v proj2-web.test":
* Rebuilt URL to: proj2-web.test/
* Trying 127.0.0.1...
* TCP_NODELAY set
* connect to 127.0.0.1 port 80 failed: Connection refused
* Failed to connect to proj2-web.test port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to proj2-web.test port 80: Connection refused
Is there something I'm missing here? It appears the individual containers don't have access to the DNS that dnsmasq provides to my local machine, and I cannot figure out how to grant them that access. Maybe I'm going about this the wrong way; I am open to suggestions.
I ended up creating a solution that addresses my question. You can see the repository for the tool here:
https://github.com/scoremedia/dcdc
I also created a blog post detailing a bit of this: https://kevinjalbert.com/docker-compose-dns-consistency-dcdc/
Hopefully this helps others.
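For context, the gist of the approach is roughly the following (a sketch of the general idea, not dcdc's exact implementation; all addresses are illustrative). Create the shared network with a known subnet:

docker network create --subnet 172.28.0.0/16 proxynet

Then pin the dnsmasq container to a fixed IP on that network, answer *.test with the proxy's address, and point every other container's resolver at it:

services:
  dnsmasq:
    image: andyshinn/dnsmasq
    command: --address=/test/172.28.0.2   # illustrative proxy IP
    networks:
      default:
        ipv4_address: 172.28.0.53         # illustrative fixed resolver IP

# every other service then gets:
#   dns: 172.28.0.53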
I'm running a mongo instance with docker-compose and traefik.
myapp-mongo:
  build: ../images/myapp-mongo
  restart: always
  ports:
    - "27017:27017"
  labels:
    - "traefik.ports=27017,27018"
    - "traefik.backend=myapp-mongo"
    - "traefik.frontend.rule=Host:myapp-mongo.docker.localhost"
  networks:
    - development
  environment:
    - MONGO_USER=${MONGO_USER}
    - MONGO_PASSWD=${MONGO_PASSWD}
    - MONGO_AUTHDB=${MONGO_AUTHDB}
Mongo is running fine and I can connect using 127.0.0.1 from my Mac.
The problem is that I can't connect using hostname myapp-mongo.docker.localhost. It only works using IP 127.0.0.1.
Trying to ping the IP 127.0.0.1 responds ok, but trying to ping the hostname doesn't work.
I've already added 127.0.0.1 proxy.docker.localhost into /etc/hosts to get traefik working.
All other web apps have their hostnames working fine, e.g. myapp.docker.localhost. This problem only happens with this mongodb container.
Probably because Træfik is an HTTP proxy and so only supports HTTP/HTTPS connections.
I believe @bpatel is right (see the comment I left on his answer with a link to the GitHub conversation): Traefik, at the time of writing, only supports HTTP/HTTPS.
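(As an aside, Traefik v2 later added TCP routing, which can proxy raw TCP services like mongo. A minimal sketch of the v2 labels, assuming a TCP entrypoint named mongo listening on 27017 has been defined:)

labels:
  - "traefik.tcp.routers.mongo.rule=HostSNI(`*`)"   # catch-all, required for non-TLS TCP
  - "traefik.tcp.routers.mongo.entrypoints=mongo"
  - "traefik.tcp.services.mongo.loadbalancer.server.port=27017"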
Solution using native docker networks
However, you can get around this issue! Since you are using docker, you can work around it by using the container name in your code (assuming mongo and your mongo-accessing code are both running in containers on a shared docker network; this will be the case if the containers are spun up with docker-compose). Run the following to check that your containers are linked up correctly:
run docker ps to get the names of your running containers (under the NAMES column)
run docker network ls to see your network names
run docker network inspect <target_network_name> to verify your containers from step 1 are on the same network.
I run docker-compose from three separate compose files, so you should be able to cover most cases with the following (apologies for any syntax errors; these are stripped-down code examples):
Entire docker-compose file that starts up traefik (under the directory name 'proxy'):
version: '2'

services:
  traefik:
    image: traefik
    command: --web --docker --docker.domain=docker.localhost --logLevel=DEBUG
    networks:
      - webgateway
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /dev/null:/traefik.toml

networks:
  webgateway:
    driver: bridge
Snippet from my docker-compose file that spins up mongo:
version: '2'

services:
  database:
    image: mongo
    ports:
      - "27017:27017"
    networks:
      - web

networks:
  web:
    external:
      name: proxy_webgateway
Snippet from the docker-compose file that has the mongo-accessing code:
version: '2'

services:
  topicOntologyBuilder:
    image: topic-ontology-builder
    labels:
      - "traefik.backend=topicOntologyBuilder"
      - "traefik.port=80"
      - "traefik.frontend.rule=Host:topic-ontology.docker.localhost"
    networks:
      - web
    volumes:
      - ./:/home

networks:
  web:
    external:
      name: proxy_webgateway
Connection in Code
I'm not certain what language you're using, but this is what the JS code looked like for me to connect to mongo (inside that topicOntologyBuilder container, while using traefik as the proxy). Again, this works because we're making the most of docker networks:
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://<MONGO_CONTAINER_NAME>/<DB_NAME>', function(err, db) {
  // insert code here to interact with mongo
});
Why this works
This works because docker does some clever DNS work inside the containers: each container can look up the IPs of other containers by their container names.
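You can watch this resolution happen from inside a container on the shared network, for example (assuming the image ships ping; Docker's embedded DNS listens on 127.0.0.11 inside each container):

docker-compose exec topicOntologyBuilder ping -c 1 database
# 'database' resolves via Docker's embedded DNS to the mongo container's IP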
Extra intel
If your containers are on separate computers/VMs, you'll probably want to play around with a service discovery tool (Consul plays well with Traefik) or do something fancy with a docker overlay network, which is specific to containers in a cluster.
If using raw docker networks, you can assign container aliases (this doesn't work with Traefik though, or at least it didn't a couple of months back); see the sketch below.
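In compose terms, an alias looks like this (hypothetical snippet; it lets other containers on the network reach the service as mongo regardless of its container name):

services:
  database:
    image: mongo
    networks:
      web:
        aliases:
          - mongo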
I'm using Docker for Mac. I have two containers.
1st: A PHP application that is attempting to connect to MySQL at localhost:3306.
2nd: MySQL
When running with links, they are able to reach each other.
However, I would like to avoid changing any of the code in the PHP application (e.g. changing localhost to "mysql") and keep using localhost.
Host networking seems to do the trick; the problem is that when I enable host networking I can't access the PHP application on port 80 from my host Mac.
If I docker exec -it into the PHP application container and curl localhost, I see the HTML, so it looks like the port is just not being forwarded to the host machine?
Here is an example docker-compose setup. It runs MySQL in one container and phpMyAdmin in another; the containers are linked together, and you can access them from your host machine on ports 3316 and 8889:
my_mysql:
  image: mysql/mysql-server:latest
  container_name: my_mysql
  environment:
    - MYSQL_ROOT_PASSWORD=1234
    - MYSQL_DATABASE=test
    - MYSQL_USER=test
    - MYSQL_PASSWORD=test
  ports:
    - 3316:3306
  restart: always

phpmyadmin:
  image: phpmyadmin/phpmyadmin
  container_name: my_myadmin
  links:
    - my_mysql:my_mysql
  environment:
    - PMA_ARBITRARY=0
    - PMA_HOST=my_mysql
  ports:
    - 8889:80
  restart: always
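Once both containers are up, phpMyAdmin answers at http://localhost:8889, and the database can be reached from the host with a standard MySQL client using the credentials from the environment above:

mysql -h 127.0.0.1 -P 3316 -u test -ptest test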