I am developing a service and using Docker Compose to spin up services like Postgres, Redis, and Elasticsearch. I have a web application based on Ruby on Rails that reads from and writes to all of those services.
Here is my docker-compose.yml
version: '2'
services:
  redis:
    image: redis:2.8
    networks:
      - frontapp
  elasticsearch:
    image: elasticsearch:2.2
    networks:
      - frontapp
  postgres:
    image: postgres:9.5
    environment:
      POSTGRES_USER: elephant
      POSTGRES_PASSWORD: smarty_pants
      POSTGRES_DB: elephant
    volumes:
      - /var/lib/postgresql/data
    networks:
      - frontapp
networks:
  frontapp:
    driver: bridge
And I can ping containers within this network:
$ docker-compose run redis /bin/bash
root@777501e06c03:/data# ping postgres
PING postgres (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: icmp_seq=0 ttl=64 time=0.346 ms
64 bytes from 172.20.0.2: icmp_seq=1 ttl=64 time=0.047 ms
...
So far so good. Now I want to run the Ruby on Rails application on my host machine but still be able to access the Postgres instance with a URL like postgresql://username:password@postgres/database. Currently that is not possible:
$ ping postgres
ping: unknown host postgres
I can see my network in Docker:
$ docker network ls
NETWORK ID NAME DRIVER
ac394b85ce09 bridge bridge
0189d7e86b33 elephant_default bridge
7e00c70bde3b elephant_frontapp bridge
a648554a72fa host host
4ad9f0f41b36 none null
And I can see an interface for it:
$ ifconfig
br-0189d7e86b33 Link encap:Ethernet HWaddr 02:42:76:72:bb:c2
inet addr:172.18.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:76ff:fe72:bbc2/64 Scope:Link
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:36 errors:0 dropped:0 overruns:0 frame:0
TX packets:60 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2000 (2.0 KB) TX bytes:8792 (8.7 KB)
br-7e00c70bde3b Link encap:Ethernet HWaddr 02:42:e7:d1:fe:29
inet addr:172.20.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
inet6 addr: fe80::42:e7ff:fed1:fe29/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1584 errors:0 dropped:0 overruns:0 frame:0
TX packets:1597 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:407137 (407.1 KB) TX bytes:292299 (292.2 KB)
...
But I am not sure what I should do next. I tried playing with /etc/resolv.conf, mainly the nameserver directive, but that had no effect.
I would appreciate any help or suggestions on how to configure this setup correctly.
UPDATE
After browsing through internet resources I managed to assign static IP addresses to the boxes. For now that is enough for me to continue development. Here is my current docker-compose.yml:
version: '2'
services:
  redis:
    image: redis:2.8
    networks:
      frontapp:
        ipv4_address: 172.25.0.11
  elasticsearch:
    image: elasticsearch:2.2
    networks:
      frontapp:
        ipv4_address: 172.25.0.12
  postgres:
    image: postgres:9.5
    environment:
      POSTGRES_USER: elephant
      POSTGRES_PASSWORD: smarty_pants
      POSTGRES_DB: elephant
    volumes:
      - /var/lib/postgresql/data
    networks:
      frontapp:
        ipv4_address: 172.25.0.10
networks:
  frontapp:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.25.0.0/16
          gateway: 172.25.0.1
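With these static addresses in place, the host can reach Postgres directly by IP. A minimal sketch, assuming the psql client is installed on the host and using the credentials and pinned address from the compose file above:
psql postgresql://elephant:smarty_pants@172.25.0.10/elephant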
There is an open-source application that solves this issue. It's called DNS Proxy Server; here are some examples from the official repository.
It's a DNS server that resolves container hostnames; if it cannot find a matching hostname, it falls back to resolving the name from the internet.
Start the DNS server:
$ docker run --hostname dns.mageddo --restart=unless-stopped -p 5380:5380 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /etc/resolv.conf:/etc/resolv.conf \
defreitas/dns-proxy-server
It will automatically be set as your default DNS (and the original will be restored when it stops).
Create some containers for testing.
Check the docker-compose file:
$ cat docker-compose.yml
version: '3'
services:
  nginx-1:
    image: nginx
    hostname: nginx-1.docker
    network_mode: bridge
  linux-1:
    image: alpine
    hostname: linux-1.docker
    command: sh -c 'apk add --update bind-tools && tail -f /dev/null'
    network_mode: bridge # this way it can also resolve other containers' names from inside, e.g. nginx-2
Start the containers:
$ docker-compose up
Resolving containers
From the host:
nslookup nginx-1.docker
Server: 13.0.0.5
Address: 13.0.0.5#53
Non-authoritative answer:
Name: nginx-1.docker
Address: 13.0.0.6
From another container:
$ docker-compose exec linux-1 ping nginx-1.docker
PING nginx-1.docker (13.0.0.6): 56 data bytes
64 bytes from 13.0.0.6: seq=0 ttl=64 time=0.034 ms
It resolves internet hostnames as well:
$ nslookup google.com
Server: 13.0.0.5
Address: 13.0.0.5#53
Non-authoritative answer:
Name: google.com
Address: 216.58.202.78
I'm using a bash script to update /etc/hosts. Why this solution?
- Short script, easy to review (I didn't want to give some unreviewed application with lots of dependencies access to the Docker socket, which means root access).
- It uses docker events to run every time a container is started or stopped (other solutions posted here run every second in a loop, which is way less efficient).
- Updates /etc/hosts, no separate DNS server needed.
- Only dependencies are bash, mktemp, grep, xargs, sed, jq and docker, all of which I had already installed.
Just put the script somewhere, e.g. /usr/local/bin/docker-update-hosts:
#!/usr/bin/env bash
set -e -u -o pipefail

hosts_file=/etc/hosts
begin_block="# BEGIN DOCKER CONTAINERS"
end_block="# END DOCKER CONTAINERS"

# Make sure the managed marker block exists in /etc/hosts
if ! grep -Fxq "$begin_block" "$hosts_file"; then
  echo -e "\n${begin_block}\n${end_block}\n" >> "$hosts_file"
fi

# Emit one synthetic event so the block is populated on startup,
# then follow the live Docker event stream
(echo "| container start |" && docker events) | \
  while read event; do
    if [[ "$event" == *" container start "* ]] || [[ "$event" == *" network disconnect "* ]]; then
      hosts_file_tmp="$(mktemp)"

      # Build "IP name" lines for every running container and splice
      # them between the markers, leaving the rest of the file intact
      docker container ls -q | xargs -r docker container inspect | \
        jq -r '.[]|"\(.NetworkSettings.Networks[].IPAddress|select(length > 0) // "# no ip address:") \(.Name|sub("^/"; "")|sub("_1$"; ""))"' | \
        sed -ne "/^${begin_block}$/ {p; r /dev/stdin" -e ":a; n; /^${end_block}$/ {p; b}; ba}; p" "$hosts_file" \
        > "$hosts_file_tmp"

      chmod 644 "$hosts_file_tmp"
      mv "$hosts_file_tmp" "$hosts_file"
    fi
  done
Note: The script removes the _1 suffix added by docker-compose from container names. If you don't want that just remove |sub("_1$"; "") from the script.
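For reference, after a run the managed block might look like this for the compose project from the original question (illustrative: names follow compose's <project>_<service> pattern with the _1 suffix stripped, and the addresses are the static ones assigned earlier in the thread):
# BEGIN DOCKER CONTAINERS
172.25.0.10 elephant_postgres
172.25.0.11 elephant_redis
172.25.0.12 elephant_elasticsearch
# END DOCKER CONTAINERS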
You can use a systemd service to run this synchronously with Docker: /etc/systemd/system/docker-update-hosts.service:
[Unit]
Description=Update Docker containers in /etc/hosts
Requires=docker.service
After=docker.service
PartOf=docker.service

[Service]
ExecStart=/usr/local/bin/docker-update-hosts

[Install]
WantedBy=docker.service
To activate, run:
systemctl daemon-reload
systemctl enable docker-update-hosts.service
systemctl start docker-update-hosts.service
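To confirm the watcher is running afterwards (standard systemd command; the exact output varies by distribution):
systemctl status docker-update-hosts.service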
If you're only using your docker-compose setup locally, you could map the ports from your containers to your host:
elasticsearch:
  image: elasticsearch:2.2
  ports:
    - 9300:9300
    - 9200:9200
Then use localhost:9300 (or 9200 depending on protocol) from your web-app to access Elasticsearch.
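The same mapping works for the Postgres service from the question. A sketch, assuming the standard Postgres port 5432 and the image from the original compose file:
postgres:
  image: postgres:9.5
  ports:
    - 5432:5432
The Rails app on the host could then connect with postgresql://elephant:smarty_pants@localhost/elephant.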
A more complex solution is to run your own DNS that resolves container names. I think this is a lot closer to what you're asking for. I have previously used SkyDNS when running Kubernetes locally.
There are a few options out there. Have a look at https://github.com/gliderlabs/registrator and https://github.com/jderusse/docker-dns-gen. I didn't try it, but you could potentially map the DNS port to your host in the same way as the Elasticsearch ports in the previous example, and then add localhost to your resolv.conf to be able to resolve your container names from the host.
There are two solutions (besides /etc/hosts) described here and here
I wrote my own solution in Python and implemented it as a service to map container hostnames to their IPs. Here it is: https://github.com/nicolai-budico/dockerhosts
It launches dnsmasq with the parameter --hostsdir=/var/run/docker-hosts and updates the file /var/run/docker-hosts/hosts each time the list of running containers changes.
Once /var/run/docker-hosts/hosts changes, dnsmasq automatically updates its mapping and the container becomes available by hostname within a second.
$ docker run -d --hostname=myapp.local.com --rm -it ubuntu:17.10
9af0b6a89feee747151007214b4e24b8ec7c9b2858badff6d584110bed45b740
$ nslookup myapp.local.com
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: myapp.local.com
Address: 172.17.0.2
There are install and uninstall scripts. All you need to do is allow your system to interact with this dnsmasq instance. I registered it in systemd-resolved:
$ cat /etc/systemd/resolved.conf
[Resolve]
DNS=127.0.0.54
#FallbackDNS=
#Domains=
#LLMNR=yes
#MulticastDNS=yes
#DNSSEC=no
#Cache=yes
#DNSStubListener=udp
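After editing the file, the stub resolver has to be restarted for the change to take effect (standard systemd command; 127.0.0.54 is the address the dnsmasq instance above listens on):
sudo systemctl restart systemd-resolved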
The hostname of a Docker container cannot be seen from outside. What you can do is assign a name to the container and access it through that name. If you link two containers, say container1 and container2, Docker takes care of writing container2's IP and hostname into container1. However, in your case your application is running on the host machine.
OR
You know the IP of the container, so in your host machine's /etc/hosts you can add $IP $hostname_of_container
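A quick sketch of that approach (docker inspect and its Go-template flag are standard; the container name elephant_postgres_1 is an assumption based on the compose project in the question):
# append "<container IP> postgres" to the host's /etc/hosts
echo "$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' elephant_postgres_1) postgres" | sudo tee -a /etc/hosts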
Aditya is correct. In your case the simplest approach is to hard-code your hostname/IP mapping in /etc/hosts.
The problem with this approach, however, is that you do not control the private IP address your Postgres machine will have. The IP address will change every time you start a new container, so you will need to update your /etc/hosts file.
If that's an issue, I would recommend that you read this blog post that explains how to enforce that a container get a specific IP address:
https://xand.es/2016/05/09/docker-with-known-ip/
You could Dockerize the RoR app, or any other app that needs access to the containers.
I know, this is a trivial solution, but let me explain:
I wanted a similar thing, but for a different reason: I am implementing SSO via SAML and wanted to create a dev environment where I could test the solution. Originally I wanted to run the browser on the host machine, but since I had problems accessing arbitrary ports on the clients from the host, and the DNS-based solution by deFreitas does not work on Mac, I realized I could dockerize a browser:
docker run --rm -p 8085:8085 chadmoon/gtk3-docker
See: https://github.com/moondev/gtk3-docker for details.
Related
For a testing situation, I've created a docker-compose file which stands up two containers:
version: "3.8"
services:
foo:
ports:
- 5000:5000/tcp
networks:
- default
...
bar:
networks:
- default
...
networks:
default:
ipam:
driver: internal
config:
- subnet: 172.100.0.0/16
For the purposes of my tests, I need bar to be unreachable from the host. Unfortunately, when I run docker-compose up, ip addr shows that my host is on that network at 172.100.0.1.
How do I change my YAML file to accomplish this?
Caveat: The address range you've selected (172.100.0.0/16) is a routable address range that apparently belongs to Charter Communications. This means that attempts to reach addresses in that range -- if there's no specific route to your containers -- will get routed via your default gateway and potentially to actual machines somewhere on the internet.
Anyway, to your question:
I don't believe this is going to be possible using only Docker. For a bridged network, Docker will (a) always create a host bridge and (b) always assign an IP address to that bridge, which means containers will always be routable from the host.
You can create containers that have no network connectivity by using the none network:
version: "3.8"
services:
foo:
network_mode: none
image: docker.io/alpinelinux/darkhttpd
bar:
network_mode: none
image: docker.io/alpinelinux/darkhttpd
And then manually wire them up to a bridge using ip commands:
# Create a bridge on the host (no IP address assigned, so the host has no route to it)
ip link add br-internal type bridge
ip link set br-internal up

# Expose the containers' network namespaces to "ip netns" via their PIDs
foo_pid="$(docker inspect project_foo_1 --format '{{ .State.Pid }}')"
bar_pid="$(docker inspect project_bar_1 --format '{{ .State.Pid }}')"
ln -s /proc/$foo_pid/ns/net /run/netns/foo
ln -s /proc/$bar_pid/ns/net /run/netns/bar

# Create a veth pair for foo: move one end into the container namespace,
# give it an address, and attach the other end to the bridge
ip link add foo-ext type veth peer name foo-int netns foo
ip -n foo addr add 172.100.0.10/26 dev foo-int
ip -n foo link set foo-int up
ip link set foo-ext up master br-internal

# Same wiring for bar
ip link add bar-ext type veth peer name bar-int netns bar
ip -n bar addr add 172.100.0.11/26 dev bar-int
ip -n bar link set bar-int up
ip link set bar-ext up master br-internal
Now we can ping bar from foo:
$ docker compose exec foo ping -c1 172.100.0.11
PING 172.100.0.11 (172.100.0.11): 56 data bytes
64 bytes from 172.100.0.11: seq=0 ttl=42 time=0.080 ms
--- 172.100.0.11 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.080/0.080/0.080 ms
And foo from bar:
$ docker compose exec bar ping -c1 172.100.0.10
PING 172.100.0.10 (172.100.0.10): 56 data bytes
64 bytes from 172.100.0.10: seq=0 ttl=42 time=0.047 ms
--- 172.100.0.10 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.047/0.047/0.047 ms
But our host doesn't have a route to either of these addresses:
$ ip route get 172.100.0.10
172.100.0.10 via 192.168.1.1 dev eth0 src 192.168.1.175 uid 1000
cache
That shows that attempts to reach addresses on this network will get routed via our default gateway, rather than to the containers.
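To undo the manual wiring once the test is done (standard ip commands; deleting one end of a veth pair removes both ends):
ip link del foo-ext
ip link del bar-ext
ip link del br-internal
rm /run/netns/foo /run/netns/bar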
I am creating a container with the following command:
docker run -it -p 81:80 -p 3307:3306 --net mynet123 --ip 172.18.0.22 -v /opt/lampp/htdocs:/var/www/html lamp-setia bash
It fails with an error that the port/address is already in use. Can someone share the solution?
Thanks in advance.
You can check what is already using a port by running:
lsof -i tcp:81
and
lsof -i tcp:3307
If necessary you can kill that process with:
kill -9 [pid number]
After that, you can try to re-run that docker command.
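If the holder turns out to be another container, docker ps can list the containers publishing a given port (the publish filter is available in current Docker versions):
docker ps --filter publish=81
docker ps --filter publish=3307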
Another scenario that produces the exact same error is when the IP address is in use. In my setup, I had created a network like this:
docker network create --subnet 172.28.5.0/24 cluster-test-net
and I was trying to start my docker container as below:
docker run -d --name wildfly1 --ip 172.28.5.1 -h wildfly1 -p 8080:8080 -p 9990:9990 --network=cluster-test-net wildfly-cluster-image
The reason I got the error was that Docker had already assigned the IP address 172.28.5.1 to the host itself. I noticed that when I ran ifconfig on my host and found this row in the result:
br-bb89994f6a73: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.28.5.1 netmask 255.255.255.0 broadcast 172.28.5.255
inet6 fe80::42:a2ff:fecd:81e9 prefixlen 64 scopeid 0x20<link>
ether 02:42:a2:cd:81:e9 txqueuelen 0 (Ethernet)
RX packets 4394 bytes 4695729 (4.6 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2342 bytes 175071 (175.0 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
So I just fixed it by choosing a different IP address for my docker container:
docker run -d --name wildfly1 --ip 172.28.5.10 -h wildfly1 -p 8080:8080 -p 9990:9990 --network=cluster-test-net wildfly-cluster-image
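Before picking an address you can also list what is already allocated on the network (docker network inspect is standard; note the bridge's own gateway address will not appear here, it only shows up in ifconfig as above):
docker network inspect cluster-test-net --format '{{range .Containers}}{{.IPv4Address}} {{end}}'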
It seems that some other process is already holding the host ports that you are trying to map to the container. You may consider using netstat -aon to find out whether there are existing processes holding ports 81 and 3307 on the Docker host.
The port you gave in the docker run command might be assigned to some other process. Find out what is running there; if it is something unimportant, kill it, or proceed with other available ports.
This means another container is taking your container's IP.
Stop all containers and then start yours. To resolve the conflict explicitly, stop the affected container, reassign the conflicting container's address, and start both again in order:
docker stop x
docker network connect --ip 172.24.0.4 yournetwork y
docker start y
docker start x
The start order will reveal which containers conflict. Alternatively, use docker network inspect network_name
to check whether the containers have the correct IPs.
Let me first explain what I'm trying to do, as there may be multiple ways to solve this. I have two containers in Docker 1.9.0:
node001 (172.17.0.2) (sudo docker run --net=<<bridge or test>> --name=node001 -h node001 --privileged -t -i -v /sys/fs/cgroup:/sys/fs/cgroup <<image>>)
node002 (172.17.0.3) (,,)
When I launch them with --net=bridge I get the correct value for SSH_CLIENT when I ssh from one to the other:
[root@node001 ~]# ssh root@172.17.0.3
root@172.17.0.3's password:
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.17.0.3 56194 22
[root@node001 ~]# ping -c 1 node002
ping: unknown host node002
In Docker 1.8.3 I could also use the hostnames I supply when I start them; in 1.8.3 that last ping statement works!
In Docker 1.9.0 I don't see anything being added to /etc/hosts, and the ping statement fails. This is a problem for me. So I tried creating a custom network:
docker network create --driver bridge test
When I launch the two containers with --net=test I get a different value for SSH_CLIENT:
[root@node001 ~]# ssh root@172.18.0.3
root@172.18.0.3's password:
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.18.0.1 57388 22
[root@node001 ~]# ping -c 1 node002
PING node002 (172.18.0.3) 56(84) bytes of data.
64 bytes from node002 (172.18.0.3): icmp_seq=1 ttl=64 time=0.041 ms
Note that the IP address is not node001's; it seems to represent the Docker host itself. The hosts file is correct though, containing:
172.18.0.2 node001
172.18.0.2 node001.test
172.18.0.3 node002
172.18.0.3 node002.test
My current workaround is using docker 1.8.3 with the default bridge network, but I want this to work with future docker versions.
Is there any way I can customize the test network to make it behave similarly to the default bridge network?
Alternatively:
Maybe make the default bridge network write out the /etc/hosts file in docker 1.9.0?
Any help or pointers towards different solutions will be greatly appreciated.
Edit: 21-01-2016
Apparently the problem is fixed in 1.9.1: with bridge in Docker 1.8 and with a custom network (--net=test) in 1.9.1, the behaviour is now correct:
[root@node001 tmp]# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.5
[root@node002 ~]# env | grep SSH_CLIENT
SSH_CLIENT=172.18.0.3 52162 22
Retried in 1.9.0 to see if I wasn't crazy, and yeah there the problem occurs:
[root@node001 tmp]# ip route
default via 172.18.0.1 dev eth0
172.18.0.0/16 dev eth0 proto kernel scope link src 172.18.0.3
[root@node002 ~]# env|grep SSH_CLI
SSH_CLIENT=172.18.0.1 53734 22
So after removing/stopping/starting the instances the IP addresses were not exactly the same, but it can easily be seen that the SSH_CLIENT source IP is not correct in the last code block. Thanks @sourcejedi for making me re-check.
Firstly, I don't think it's possible to change any settings on the default network, i.e. to write /etc/hosts. You apparently can't delete the default networks, so you can't recreate them with different options.
Secondly
Docker is careful that its host-wide iptables rules fully expose containers to each other’s raw IP addresses, so connections from one container to another should always appear to be originating from the first container’s own IP address. docs.docker.com
I tried reproducing your issue with the random containers I've been playing with. Running wireshark on the bridge interface for the network, I didn't see my ping packets. From this I conclude my containers are indeed talking directly to each other; the host was not doing routing and NAT.
You need to check the routes in your client container with ip route. Do you have a route for 172.18.0.0/16? If you only have a default route, it could try to send everything through the Docker host. And it might get confused and do masquerading as if it was talking with the outside world.
This might happen if you're running some network configuration in your privileged container. I don't know what's happening if you're just booting it with bash though.
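If you suspect masquerading, a quick check on the Docker host shows which NAT rules are in play (standard iptables invocation):
iptables -t nat -L POSTROUTING -n -v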