Local hostnames for Docker containers

Beginner Docker question here.
I have a development environment in which I'm running a modular app. It works, using Docker Compose to run three containers: server, client, and database.
The docker-compose.yml looks like this:
#############################
# Server
#############################
server:
  container_name: server
  domainname: server.dev
  hostname: server
  build: ./server
  working_dir: /app
  ports:
    - "3000:3000"
  volumes:
    - ./server:/app
  links:
    - database
#############################
# Client
#############################
client:
  container_name: client
  domainname: client.dev
  hostname: client
  image: php:5.6-apache
  ports:
    - "80:80"
  volumes:
    - ./client:/var/www/html
#############################
# Database
#############################
database:
  container_name: database
  domainname: database.dev
  hostname: database
  image: postgres:9.4
  restart: always
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=root
    - POSTGRES_DB=dbdev
    - PG_TRUST_LOCALNET=true
  ports:
    - "5432:5432"
  volumes:
    - ./database/scripts:/docker-entrypoint-initdb.d # init scripts
You can see I'm assigning a .dev domainname to each one. This works fine for seeing one machine from another over the Docker internal network; for example, here I'm pinging server.dev from client.dev's CLI:
root@client:/var/www/html# ping server.dev
PING server.dev (127.0.53.53): 56 data bytes
64 bytes from 127.0.53.53: icmp_seq=0 ttl=64 time=0.036 ms
This works great internally, but not on my host OS network.
For convenience, I would like to assign domains on MY local network, not the Docker containers' network, so that I can, for example, type client.dev in my browser's URL bar and load the Docker container.
Right now, I can only access the containers via the Docker machine's IP, which is dynamic:
client: 192.168.99.100:80
server: 192.168.99.100:3000
database: 192.168.99.100:5432
Is there an automated/convenient way to do this that doesn't involve me manually adding the IP to my /etc/hosts file?
BTW I'm on OSX if that has any relevance.
Thanks!
Edit: I found this Github issue which seems to be related: https://github.com/docker/docker/issues/2335
As far as I understood, they seem to say that this is not available out of the box, and they suggest external tools like:
https://github.com/jpetazzo/pipework
https://github.com/bnfinet/docker-dns
https://github.com/gliderlabs/resolvable
Is that correct? And if so, which one should I go for in my particular scenario?

OK,
so since it seems that there is no native way to do this with Docker, I finally opted for this alternate solution from Ryan Armstrong, which consists of dynamically updating the /etc/hosts file.
I chose this because it works as a script, and I already had a startup script, so I could just append this function to it.
The following example creates a hosts entry named docker.local which
will resolve to your docker-machine IP:
update-docker-host(){
  # clear existing docker.local entry from /etc/hosts
  sudo sed -i '' '/[[:space:]]docker\.local$/d' /etc/hosts
  # get the IP of the running machine from DOCKER_HOST
  export DOCKER_IP="$(echo ${DOCKER_HOST} | grep -oE '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')"
  # update /etc/hosts with the docker machine IP
  [[ -n $DOCKER_IP ]] && sudo /bin/bash -c "echo \"${DOCKER_IP} docker.local\" >> /etc/hosts"
}
update-docker-host
This automatically adds or updates the /etc/hosts line on my host OS when I start the Docker machine through my startup script.
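Since this runs as a plain shell function, one way to hook it in (my own setup, assuming a docker-machine named "default") is from the shell profile:
# e.g. in ~/.bash_profile: load the machine's env (sets DOCKER_HOST), then refresh /etc/hosts
eval "$(docker-machine env default)"
update-docker-host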
Anyway, as I found out during my research, apart from editing the hosts file you could also solve this problem by setting up a custom DNS server. I found several projects on GitHub that apparently aim to do exactly that, although I didn't try them:
https://github.com/jpetazzo/pipework
https://github.com/bnfinet/docker-dns
https://github.com/gliderlabs/resolvable

Extending on @eduwass's own answer, here's what I did manually (without a script).
1. As mentioned in the question, define domainname: myapp.dev and hostname: www in the docker-compose.yml file
2. Bring up your Docker containers as normal
3. Run docker-compose exec client cat /etc/hosts to get an output of the container's hosts file (where client is your service name)
(Output example: 172.18.0.6 www.myapp.dev)
4. Open your local (host machine) /etc/hosts file and add that line: 172.18.0.6 www.myapp.dev
If your Docker service container changes IPs or does anything fancy you will want a more complex solution, but this is working for my simple needs at the moment.
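If you want to semi-automate the lookup step without a full script, a one-liner sketch (assuming the service container is named client and the hostname www.myapp.dev from above):
# read the container's current IP and append a matching hosts entry
CLIENT_IP="$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' client)"
echo "${CLIENT_IP} www.myapp.dev" | sudo tee -a /etc/hosts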

Another solution would be to use a browser with a proxy extension sending the requests through a proxy container that will know where to resolve the domains to. If you consider using jwilder/nginx-proxy for production mode, then your issue can be easily solved with mitm-nginx-proxy-companion.
Here is an example based on your original stack:
version: '3.3'
services:
  server:
    build: ./server
    working_dir: /app
    volumes:
      - ./server:/app
  client:
    environment:
      - VIRTUAL_HOST=client.dev
    image: php:5.6-apache
    volumes:
      - ./client:/var/www/html
  database:
    image: postgres:9.4
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=dbdev
      - PG_TRUST_LOCALNET=true
    volumes:
      - ./database/scripts:/docker-entrypoint-initdb.d # init scripts
  nginx-proxy:
    image: jwilder/nginx-proxy
    labels:
      - "mitmproxy.proxyVirtualHosts=true"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  nginx-proxy-mitm:
    dns:
      - 127.0.0.1
    image: artemkloko/mitm-nginx-proxy-companion
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
1. Run docker-compose up
2. Add a proxy extension to your browser, with the proxy address set to 127.0.0.1:8080
3. Access http://client.dev
The request will follow the route:
Access a local development domain in a browser
The proxy extension forwards that request to mitm-nginx-proxy-companion instead of the “real” internet
mitm-nginx-proxy-companion tries to resolve the domain name through the dns server in the same container
If the domain is not a “local” one, it will forward the request to the “real” internet
But if the domain is a “local” one, it will forward the request to the nginx-proxy
The nginx-proxy in turn forwards the request to the appropriate container that includes the service we want to access
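If you'd rather check this chain without a browser extension, you can point curl at the same proxy (a quick test of my own, not part of the original setup):
# send the request through the mitm proxy listening on 127.0.0.1:8080
curl -x http://127.0.0.1:8080 http://client.dev/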
Side notes:
links was removed, as it's outdated and has been replaced by Docker networks
you don't need to add domain names to the server and database containers: client will be able to reach them at the hostnames server and database because they are all in the same network (similar to what links did previously)
you don't need ports on the server and database containers, because that only forwards ports for use through 127.0.0.1 on the host. PHP in the client container only makes "back-end" requests to the other containers, and because those containers are in the same network you can already reach them at database:5432 and server:3000. The same goes for server <-> database connections.
I am the author of mitm-nginx-proxy-companion

To point a whole domain at localhost you can use dnsmasq. In that case, if you choose the domain .dev, any subdomain will point to your container. But be aware of the problems with the .dev zone: it is a real gTLD owned by Google and browsers force HTTPS on it, so .test is a safer choice.
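As a minimal illustration of the dnsmasq approach (my sketch, assuming dnsmasq is installed locally and using the safer .test domain):
# answer every *.test query with 127.0.0.1; needs sudo to bind port 53
sudo dnsmasq --no-daemon --address=/test/127.0.0.1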
Alternatively, you can launch your docker-compose through a bash script that adds a line to /etc/hosts on start and removes it again when you kill the process:
#!/usr/bin/env bash
# prepend a hosts entry, and remove it again when this script receives Ctrl+C (SIGINT)
sudo sed -i '1s;^;127.0.0.1 example.dev\n;' /etc/hosts
trap 'sudo sed -i "/example.dev/d" /etc/hosts' INT
docker-compose up

My Bash script WITH ALIAS without docker-machine
Based on http://cavaliercoder.com/blog/update-etc-hosts-for-docker-machine.html
#!/bin/bash
# aliases: map container name -> comma-separated list of extra hostnames
declare -A aliasArr
aliasArr[docker_name]="alias1,alias2"

# clear existing *.docker.local entries from /etc/hosts
sudo sed -i '/\.docker\.local$/d' /etc/hosts

# iterate over each container
docker ps -a --format "{{.Names}}" \
  | while read -r MACHINE; do
    MACHINE_IP="$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ${MACHINE} 2>/dev/null)"
    if [[ ${aliasArr[$MACHINE]} ]]; then
      DOMAIN_NAME=$(echo ${aliasArr[$MACHINE]} | tr "," "\n")
    else
      DOMAIN_NAME=( ${MACHINE} )
    fi
    for addr in $DOMAIN_NAME; do
      echo "add ${MACHINE_IP} ${addr}.docker.local"
      [[ -n $MACHINE_IP ]] && sudo /bin/bash -c "echo \"${MACHINE_IP} ${addr}.docker.local\" >> /etc/hosts"
      export no_proxy=$no_proxy,$MACHINE_IP
    done
  done

Related

Using custom local domain with Docker

I am running Docker using Docker Desktop on Windows.
I would like to set up a simple server.
I run it using:
$ docker run -di -p 1234:80 yahya/example-server
This works as expected and runs fine on localhost:1234.
However, I want to give it its own local domain name (e.g. api.example.test), which should only be accessible locally.
Normally, for a VM setup, I would edit the Windows hosts file, get the IP address of the VM (let's say it's 192.168.90.90) and add something like the following:
192.168.90.90 api.example.test
How would I do something similar in Docker?
I know you can enter an IP address for port forwarding, but if I enter any local IP I get the following error:
$ docker run -di -p 192.168.90.90:1234:80 yahya/example-server
docker: Error response from daemon: Ports are not available: exposing port TCP 192.168.90.90:80 -> 0.0.0.0:0: listen tcp 192.168.90.90:80: can't bind on the specified endpoint.
However, it does work for 10.0.0.7 for some reason (I found this IP automatically added to the hosts file after installing Docker Desktop).
$ docker run -di -p 10.0.0.7:1234:80 yahya/example-server
This essentially solves the issue, but it would become a problem again if I had more than one project.
Is there a way I can use another local IP address (preferably without an nginx proxy)?
I think there is no simple way to do this without some kind of reverse proxy.
In my dev environment I use Traefik and dnscrypt-proxy to get automatic *.test domain names for multiple projects at the same time.
First, start the Traefik proxy on ports 80 and 443; example docker-compose.yml:
---
networks:
  traefik:
    name: traefik
services:
  traefik:
    image: traefik:2.8.3
    container_name: traefik
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik
    ports:
      - 80:80
      - 443:443
    environment:
      TRAEFIK_API: 'true'
      TRAEFIK_ENTRYPOINTS_http: 'true'
      TRAEFIK_ENTRYPOINTS_http_ADDRESS: :80
      TRAEFIK_ENTRYPOINTS_https: 'true'
      TRAEFIK_ENTRYPOINTS_https_ADDRESS: :443
      TRAEFIK_ENTRYPOINTS_https_HTTP_TLS: 'true'
      TRAEFIK_GLOBAL_CHECKNEWVERSION: 'false'
      TRAEFIK_GLOBAL_SENDANONYMOUSUSAGE: 'false'
      TRAEFIK_PROVIDERS_DOCKER: 'true'
      TRAEFIK_PROVIDERS_DOCKER_EXPOSEDBYDEFAULT: 'false'
Then, attach your service to the traefik network and set labels for routing (see Traefik & Docker). Example docker-compose.yml:
---
networks:
  traefik:
    external: true
services:
  example:
    image: yahya/example-server
    restart: always
    labels:
      traefik.enable: true
      traefik.docker.network: traefik
      traefik.http.routers.example.rule: Host(`example.test`)
      traefik.http.services.example.loadbalancer.server.port: 80
    networks:
      - traefik
Finally, add to hosts:
127.0.0.1 example.test
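You can sanity-check the Traefik routing before the hosts entry even exists by sending the Host header explicitly (a quick test of my own, not from the original steps):
# ask Traefik on port 80 to route to the example.test service
curl -H 'Host: example.test' http://127.0.0.1/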
Instead of manually adding all future domains to hosts, you can set up a local DNS resolver. I prefer to use the cloaking feature of dnscrypt-proxy for this.
You can install it following the installation instructions, then uncomment the following line in dnscrypt-proxy.toml:
cloaking_rules = 'cloaking-rules.txt'
and add to cloaking-rules.txt:
*.test 127.0.0.1
Finally, set up your network connection to use 127.0.0.1 as its DNS resolver.
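To verify the resolver end to end (assuming dnscrypt-proxy is listening on 127.0.0.1:53):
# any name under .test should now resolve to the loopback address
dig +short whatever.test @127.0.0.1
# expected output: 127.0.0.1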

Pass current local ip to dnsmasq command in docker-compose

Setup
I have a setup with multiple containers, using dnsmasq as a nameserver for my virtual hosts. I want the containers to be accessible within my local network, so I need to resolve all requests to the current local IP of the machine the containers are running on (here 192.168.178.21):
version: "3"
services:
  dnsmasq:
    image: andyshinn/dnsmasq
    ports:
      - 53:53/tcp
      - 53:53/udp
    cap_add:
      - NET_ADMIN
    command: [
      "--log-queries",
      "--log-facility=-",
      "--address=/.test/192.168.178.21"
    ]
  apache:
    ...
  gulp:
    ...
  nginx-proxy:
    ...
Issue
What I would like to do is 'add' the current IP dynamically, conceptually like a variable that resolves to the current IP when I start docker-compose:
...
"--address=/.test/current_local_ip"
...
This way I can start a project with this setup on every development machine in the network and make it reachable for others without manually changing things in the docker-compose file. Thanks for your suggestions
You can use a .env file and add:
env_file: .env
environment:
  - IP_ADDR
and modify the command to:
"--address=/.test/$IP_ADDR"
Or map a conf file instead:
volumes:
  - .docker/dnsmasq.conf:/etc/dnsmasq.conf
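For the first option, one way to populate the .env file on each start (a sketch assuming macOS and the en0 interface, mirroring the Makefile answer below):
# write the current LAN IP into .env, then bring the stack up
echo "IP_ADDR=$(ipconfig getifaddr en0)" > .env
docker-compose up -d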
I solved it using a makefile to pass an environment variable to docker-compose like this:
Makefile
LOCAL_IP := $(shell ipconfig getifaddr en0)

all:
	make docker-start

docker-start:
	LOCAL_IP=$(LOCAL_IP) docker-compose -f dev-docker-compose.yml up --detach
dev-docker-compose.yml
version: "3"
services:
  dnsmasq:
    image: andyshinn/dnsmasq
    ports:
      - 53:53/tcp
      - 53:53/udp
    cap_add:
      - NET_ADMIN
    command: [
      "--log-queries",
      "--log-facility=-",
      "--address=/.test/${LOCAL_IP}"
    ]
...
The only issue I run into is that en0 is not always the desired ethernet adapter. Does anyone know a command that always gets the local IP regardless of the active adapter?
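One possibility, untested and offered only as an assumption: ask the routing table which interface owns the default route, then read that interface's address:
# macOS: find the interface of the default route, then get its IPv4 address
IFACE="$(route -n get default | awk '/interface:/ {print $2}')"
ipconfig getifaddr "$IFACE"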

Running Ngrok in a container using docker

https://github.com/gtriggiano/ngrok-tunnel runs ngrok inside a container. Ngrok is required to run in the container to avert security risks. But I'm facing problems after running the scripts that generate the URL:
$ docker pull gtriggiano/ngrok-tunnel
$ docker run -it -e "TARGET_HOST=localhost" -e "TARGET_PORT=3000" -p 4040 gtriggiano/ngrok-tunnel
I am running my Rails app on localhost:3000.
Is this my problem, or can it be fixed by altering the scripts (inside the repo)?
I couldn't get this working but switched to https://github.com/shkoliar/docker-ngrok and it works brilliantly.
In my case I added it to my docker-compose.yml file:
ngrok:
  image: shkoliar/ngrok:latest
  ports:
    - 4551:4551
  links:
    - web
  environment:
    - PARAMS=http -region=eu -authtoken=${NGROK_AUTH_TOKEN} localdev.docker:80
  networks:
    dev_net:
      ipv4_address: 10.5.0.10
And it's started with everything else when I do docker-compose up -d
Then there's a web UI at http://localhost:4551/ for you to see the status, requests, the ngrok URLs, etc.
The Github page does have examples of running it manually from the command line too though, rather than via docker-compose:
Command-line example: The example below assumes that you have a running
web server Docker container named dev_web_1 with exposed port 80.
docker run --rm -it --link dev_web_1 shkoliar/ngrok ngrok http dev_web_1:80
With command-line usage, the ngrok session stays active until it
is terminated with the Ctrl+C combination.
No. If you execute -p with a single number, it is the container port; the host port is randomly assigned.
Using -p, --publish ip:[hostPort]:containerPort at docker run, you can specify the host port along with the container port.
As it stands, only port 4040 of the container is exposed, and I'm not sure your service listens on it by default.
To get the localhost port, execute:
docker ps
and you'll see the actual port it is listening on:
CONTAINER ID   IMAGE                     COMMAND       CREATED              STATUS              PORTS                     NAMES
1aaaeffe789d   gtriggiano/ngrok-tunnel   "npm start"   About a minute ago   Up About a minute   0.0.0.0:32768->4040/tcp   wizardly_poincare
here it's listening on localhost:32768
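docker port gives the same answer more directly (using the container name from the docker ps output above):
# print the host endpoint mapped to the container's port 4040
docker port wizardly_poincare 4040
# -> 0.0.0.0:32768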
This compose file works for me. Note that in the command for ngrok you have to reference the other service by name:
version: '3'
services:
  yourwebserver:
    build:
      context: ./
      dockerfile: ...
      target: ...
    container_name: yourwebserver
    volumes:
      - ...
    ports:
      - ...
    extra_hosts:
      - 'host.docker.internal:host-gateway'
    depends_on:
      - ngrok
  ngrok:
    image: ngrok/ngrok:alpine
    environment:
      NGROK_AUTHTOKEN: '...'
    command: 'http yourwebserver:80'
    ports:
      - '4040:4040'
    expose:
      - '4040'
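Once this stack is up, the ngrok agent's local API can tell you the public URL (the /api/tunnels endpoint is part of the agent's standard web API):
# list active tunnels; look for the public_url field in the JSON
curl -s http://localhost:4040/api/tunnels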
I'm not sure if you have already solved this, but when I was getting this error I could only solve it like this:
# docker-compose.yml
networks:
  - development
I also needed to expose port 3000 of my web container, because it still wasn't exposed:
# docker-compose.yml
web:
  expose:
    - "3000"
My container for the server running on development is also on the development network. The only parameters, I believe, you should pass for the container to execute are image, ports, environment with DOMAIN and PORT for the server container, a link, and an expose on your web container:
# docker-compose.yml
ngrok:
  image: shkoliar/ngrok
  ports:
    - 4551:4551
  links:
    - web
  networks:
    - development
  environment:
    - DOMAIN=squad_web
    - PORT=3000
Actually, to make ngrok work with your Docker container, you can install it outside of your project just like the manual on their website says, and then add:
nginx:
  labels:
    - "traefik.http.routers.${PROJECT_NAME}_nginx.rule=Host(`${PROJECT_BASE_URL}`, `aaa-abc-xxx-140-177.eu.ngrok.io`)"
This particular example is for the docker4drupal docker-compose file, with Traefik mapped as 80:80.

docker compose connect ssh tunnel server on load container

I usually do a:
ssh -o StrictHostKeyChecking=no -L 33333:remote_server:3306 user@domain.name -i /usr/lib/key.pem
from my local computer to a remote AWS server.
Now I would like to connect AUTOMATICALLY from my ssh-container service when the container loads. Is that possible? If so, what am I doing wrong? How can I achieve this?
The container loads, and if I execute the same ssh command from its shell it does connect to my remote host.
The reason I'm trying this is that I have another service, "phpmyadmin-container", which requires that ssh connection to load the MySQL database through the tunnel.
Aside from that, I have another problem to solve with phpmyadmin: if I go into the ssh-container's shell I can see that I'm not connected.
Any help would be appreciated.
My yml file looks like:
version: '3'
services:
  phpmyadmin-container:
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_HOST: 127.0.0.1
      PMA_PORT: 33333
      PMA_USERNAME: user
      PMA_USERNAME_PASSWORD: pass
    ports:
      - '8080:80'
  ssh-container:
    image: nazarpc/webserver:ssh-v1
    restart: always
    volumes:
      # this is where I have the key.pem (it could be anywhere, am I right?)
      - /usr/lib:/usr/lib
    command: ssh -o StrictHostKeyChecking=no -L 33333:remote_server:3306 user@domain.name -i /usr/lib/key.pem
    # I have also tried it as ["ssh", "-o", "StrictHostKeyChecking=no", ...]
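One detail worth noting (my assumption, not part of the original question): ssh normally exits as soon as there is no remote command to run, so a tunnel-only container usually needs -N to hold the connection open, and the forward must bind on all interfaces for other containers to reach it. A hedged sketch of such a command:
# -N: no remote command, just keep the tunnel open; bind on 0.0.0.0 so
# phpmyadmin-container can reach port 33333 over the compose network
ssh -N -o StrictHostKeyChecking=no -L 0.0.0.0:33333:localhost:3306 user@domain.name -i /usr/lib/key.pem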

Connect from one Docker container to the other one

I am running a Java app inside a Docker container which is supposed to connect to MySQL inside another container. I've tried multiple options suggested in various forums, but nothing really works. Here is my Docker Compose file:
version: "3"
services:
  app:
    build:
      context: ./
      dockerfile: /src/main/docker/Dockerfile
    image: app1
    environment:
      - DB_HOST=Imrans-MacBook-Pro.local
      - DB_PORT=3306
    ports:
      - 8080:8080
    networks:
      - backend
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7.20
    hostname: mysql
    environment:
      - MYSQL_USER=root
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=app1
    ports:
      - 3306:3306
    command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8 --explicit_defaults_for_timestamp
    networks:
      - backend
networks:
  backend:
    driver: bridge
Where DB_HOST=Imrans-MacBook-Pro.local is my laptop's name. This did not work. Some suggest that the container name can be used, so I tried DB_HOST=mysql; that never worked either.
The only thing that works, from time to time, is passing the laptop's IP address, which is not what I want to do. So, what is a good way to set up communication between these containers?
MySQL is running in a container, so there are two things you should consider here:
1. If MySQL is running in a container, you will need to link the app container to the mysql container. This will allow them to talk to each other using Docker's inter-container communication. The containers talk to each other using hostnames to resolve their respective internal IP addresses. Later in this answer I will show you how to get the two containers to communicate with each other using a compose file.
2. The mysql container should make use of a Docker volume to store the database. This will allow you to store the database and related files on the file system of the host (the server or machine where the containers are running). The Docker volume will then be mounted as a directory in the container, so the container can read and write to a directory on the machine where the containers are running. This means that even if the containers are all deleted or removed, you will still have the database data persist. Here is a nice beginner-friendly article on Docker volumes and using them with MySQL:
https://severalnines.com/blog/mysql-docker-containers-understanding-basics
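As a minimal sketch of that idea (my own example, not from the linked article):
# create a named volume, then run mysql with its data directory persisted in it
docker volume create mysql-data
docker run -d --name mysql -e MYSQL_ALLOW_EMPTY_PASSWORD=yes \
  -v mysql-data:/var/lib/mysql mysql:5.7.20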
Container communication using only docker without compose:
You have containers "app" and "mysql"; you want to be able to access "app" on localhost, and you want "app" to be able to connect to mysql. How are you gonna do this?
1. You need to expose a port for the "app" container so we can access it on localhost. Docker containers have their own internal network, and it is closed to you unless you expose some ports with Docker.
2. You need to link the "mysql" container to "app" without exposing mysql's ports to the rest of the world.
This config should work for what you want to achieve:
version: "2"
services:
  app:
    build:
      context: ./
      dockerfile: /src/main/docker/Dockerfile
    image: app1:latest
    links:
      - mysql
    environment:
      # This is the hostname that app will reach the mysql container on.
      # If you enter the app container with:
      #   docker exec -it <app container id> bash
      #   apt-get update -y && apt-get install iputils-ping -y
      # then you should be able to ping the mysql container with:
      #   ping -c 2 mysql
      - DB_HOST=mysql
      - DB_PORT=3306
    ports:
      # You will access "app" on localhost:8080 in your browser, if this is running on your own machine.
      - 8080:8080
  mysql: # hostname actually gets set here, so no need to set it later
    image: mysql:5.7.20
    environment:
      - MYSQL_USER=root
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_DATABASE=app1
    # Remember to use a volume if you would like this container's data to persist,
    # or if you would like to restore a database backup.
    command: mysqld --lower_case_table_names=1 --skip-ssl --character_set_server=utf8 --explicit_defaults_for_timestamp
Now you can just start it up with:
$ docker-compose up
If you ran this before then just make sure to run this first before running docker-compose up:
$ docker-compose down
Let me know if that helps.
I have, in the past, gotten this to work without explicitly setting the host networking part in Docker Compose. Because the containers in a Docker Compose file are placed on a shared Docker network with each other, you really shouldn't have to do anything to get this to work: by default you should be able to attach into your Spring app's container and ping mysql successfully.
DB host should be localhost or 127.0.0.1
