Access host machine in Docker Container on Ubuntu 18.04 - docker

I am looking for a way to connect to my host machine from inside a Docker container (in my case, to access a specific port for using a proxy in the application container).
I tried network_mode: "host" (or docker run --network="host"); it worked for accessing the local machine, but changing the network driver to host caused other problems:
SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known.
Also, I cannot use ifconfig to define a network alias since I'm using Ubuntu 18.04.
What should I do?

UPDATE: Since the docker-host (https://github.com/qoomon/docker-host) image has been published in the last few months, you can use it without any manual configuration. Easy peasy!
After struggling for a day, I finally found the solution. It can be done with the --add-host flag in the docker run command, or with extra_hosts in a docker-compose.yml file, after creating an alias for the local (lo, 127.0.0.1) network interface.
So here are the instructions:
First, create an alias for the lo interface. The ifconfig command is not installed by default on Ubuntu 18.04 (it's in the net-tools package), so this is how we do it with ip:
sudo ip addr add 192.168.0.20/24 dev lo label lo:1
Then, put this in your docker-compose.yml file:
extra_hosts:
- "otherhost:192.168.0.20"
If you are not using Docker Compose, you can add a host to a container with the --add-host flag. Note that flags must come before the image name:
docker run --add-host="otherhost:192.168.0.20" image-name
Finally, when you're done with the above steps, restart your containers with docker-compose down && docker-compose up -d or docker-compose restart
Now you can log in to your container (docker-compose exec container-name bash) and test it.
NOTE: Make sure the port you need is open, using the telnet [interface-ip] [port] command.
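Putting the pieces together, a minimal docker-compose.yml sketch could look like the following (the app service and its image name are hypothetical):
version: '3'
services:
  app:
    image: your-app-image            # hypothetical application image
    extra_hosts:
      - "otherhost:192.168.0.20"     # the lo alias created above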

You can use extra_hosts in your docker-compose file, which is what you discovered yourself. I just want to add another way for when you are working in your local environment.
In docker-for-mac and docker-for-windows, within a container the DNS name host.docker.internal resolves to an IP address allowing network access to the host.
Here's the related description, extracted from the documentation:
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker for Windows.
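For example, assuming some service is listening on port 8000 on the host (the port here is just an illustration), you could reach it from inside a container on Docker for Mac/Windows like this:
curl http://host.docker.internal:8000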
There's an open issue on github concerning the implementation of this feature for docker-for-linux.


Connect to docker container on Windows

I've read this post and I've tried adding ports: "7080:7080" in docker-compose.yml but still can't connect to the container using 172.18.0.2:7080 (btw I'm a docker newbie)
The container is one of several in a DockStation project on Windows 10. The image I'm using is for OpenLiteSpeed with WordPress.
The docker-compose.yml file contents are below:
version: '2'
services:
  gnome-3-28-1804:
    image: ubuntudesktop/gnome-3-28-1804
  firefox:
    image: jlesage/firefox
  browser-box:
    image: jim3ma/browser-box
  openlitespeed:
    image: litespeedtech/openlitespeed
    ports:
      - "7080:7080"
Any ideas please?
UPDATE: IP 172.17.0.1 appears to be the default bridge gateway IP, so I assume 172.18.0.2 for this container is in some way related to that. Docker and DockStation are both running locally on host 10.0.0.10. Not sure if the setup should even be using a bridge. http://localhost:7080/ says ERR_CONNECTION_REFUSED.
UPDATE 2: I'm using Docker for Windows (Docker Desktop). Tried turning off the Windows firewall but makes no difference. Still getting ERR_CONNECTION_REFUSED for http://localhost:7080/ and http://10.0.0.10:7080/. There are 3 other containers in the project but not running, only the LiteSpeed one is running.
UPDATE 3: I created a new project, installed tutum/hello-world, and ran the new container. The hello-world container is running and I've not found any errors in the logs, but neither localhost nor 10.0.0.10 will connect; the error in Chrome is ERR_CONNECTION_REFUSED. Same if I run docker run -d -p 80 tutum/hello-world in a Windows command prompt.
What is this IP (172.18.0.2) representing? Is it a remote machine where DockStation is running?
If that is the case, check whether this port is publicly available on that machine. You added a ports section to the docker-compose.yml file, which maps the container's port to the machine's port - but whether e.g. a firewall blocks outside access to that port is a separate matter.
I would first troubleshoot by trying to access localhost:7080 from the 172.18.0.2 machine - if that works, your Docker configuration is good and you need to look for the problem in that machine's configuration (e.g. the firewall).
I tried your Compose file on my system and it works as expected - I can access port 7080 using both my host system's IP/hostname and the container's IP, and I can access ports 80 and 443 only via the container's IP (since they're not mapped to any of the host's ports).
You did not specify whether you're using Docker for Windows or Docker Toolbox - DockStation works with both, but if you're using Docker Toolbox, then you'll have to use the virtual machine's IP or hostname to access port 7080, instead of localhost. If you're using Docker for Windows, then I do not understand what is going on - are you sure the containers are running?
As for where those IPs you mentioned come from: 172.17.0.1 is most likely your host's IP on Docker's default bridge network. docker-compose, by default, creates its own bridge network for every project. In your case, on your project's network, your host's IP would be 172.18.0.1. You can view Docker's networks with the command docker network ls and their details with docker network inspect <network-name>.
You should not use any of those IPs for any reason, since there's no guarantee they'll remain the same. If you need to connect from outside, map internal container ports to your Docker host's ports, like you did with port 7080. If you need containers to connect to each other: with docker-compose you can use service names as hostnames; without it, you have to connect them to the same non-default bridge Docker network and use their container names as hostnames.
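For example, a quick way to verify this (the project name myproject is hypothetical, and this assumes ping is available inside the container):
docker network ls                                     # compose networks are named <project>_default
docker network inspect myproject_default              # lists connected containers and their IPs
docker-compose exec openlitespeed ping -c 1 firefox   # service names resolve on the project network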
This solution worked for me.
docker run -d -p 127.0.0.1:80:80 tutum/hello-world
Apparently you have to specify that you want the port exposed under localhost. With localhost entered in the browser address bar, the Hello World page then loaded - hurrah!
Once I changed the ports entry in docker-compose.yml to '127.0.0.1:80:80', it also worked when run from DockStation.
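In compose syntax, that host-IP-qualified mapping is a one-line change (a sketch; the hello-world service name is made up):
services:
  hello-world:
    image: tutum/hello-world
    ports:
      - '127.0.0.1:80:80'   # explicitly bind the published port to localhost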

How to resolve docker host names (/etc/hosts) in containers

How is it possible to resolve names defined in the Docker host's /etc/hosts from inside containers?
Containers running on my Docker host can resolve public names (e.g. www.ibm.com), so Docker DNS is working fine.
I would like to resolve names from the Docker host's /etc/hosts (e.g. 172.17.0.1 smtp) from containers.
My final goal is to connect to services running on the Docker host (e.g. an smtp server) from containers. I know I can use the Docker host IP (172.17.0.1) from containers, but I thought that Docker would use the Docker host's /etc/hosts to build the containers' resolver files as well.
I am even quite sure I have seen this working a while ago... but I could be wrong.
Any thoughts?
Giovanni
Check out the --add-host flag for the docker command: https://docs.docker.com/engine/reference/run/#managing-etchosts
$ docker run --add-host="smtp:172.17.0.1" image command
In Docker, /etc/hosts cannot be overwritten or modified at runtime (a security feature). You need to use Docker's API - in this case --add-host - to modify the file.
For docker-compose, use the extra_hosts option.
For the whole "connect to services running in host" problem, see the discussion in this GitHub issue: https://github.com/docker/docker/issues/1143.
The common approach for this problem is to use --add-host with Docker's gateway address for the host, e.g. --add-host="dockerhost:172.17.42.1". Check the issue above for some scripts that find the correct IP and start your containers.
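For instance, a small shell sketch that looks the gateway IP up and passes it along (this assumes the default docker0 bridge, the iproute2 tools, and GNU grep with PCRE support):
# find the host's IP on the docker0 bridge
DOCKER_HOST_IP=$(ip -4 addr show docker0 | grep -oP '(?<=inet )[\d.]+' | head -n 1)
# start the container with a hosts entry pointing at it
docker run --add-host="dockerhost:${DOCKER_HOST_IP}" image command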
You can set up a simple DNS server on the host, and point the container's /etc/resolv.conf at the Docker host's DNS server.
For example, with dnsmasq you can add addn-hosts=/etc/hosts to the config file. The container, by using the Docker host's DNS server, will then be able to resolve the host's /etc/hosts entries.
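A minimal sketch of that setup, assuming the default docker0 bridge IP of 172.17.0.1:
# /etc/dnsmasq.conf on the Docker host
listen-address=172.17.0.1    # answer queries arriving from the bridge
bind-interfaces
addn-hosts=/etc/hosts        # serve the host's /etc/hosts entries
# then start containers with the host as their DNS server:
docker run --dns 172.17.0.1 image command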

Why can't I curl one docker container from another via the host

I really don't understand what's going on here. I simply want to perform an HTTP request from inside one docker container to another docker container, via the host, using the host's public IP, on a published port.
Here is my setup. I have my dev machine. And I have a docker host machine with two containers. CONT_A listens and publishes a web service on port 3000.
DEV-MACHINE
HOST (Public IP = 111.222.333.444)
CONT_A (Publish 3000)
CONT_B
On my dev machine (a completely different machine)
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I SSH into the HOST
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I execute inside CONT_B
Not possible, just timeout. Ping is fine though...
docker exec -it CONT_B bash
$ curl http://111.222.333.444:3000 --> TIMEOUT
$ ping 111.222.333.444 --> OK
Why?
Ubuntu 16.04, Docker 1.12.3 (default network setup)
I know this isn't strictly an answer to the question, but there's a more Docker-ish way of solving your problem. I would forget about publishing the port for inter-container communication altogether. Instead, create an overlay network using docker swarm. You can find the full guide here, but in essence you do the following:
# create the network (--attachable, added here, lets standalone containers join a swarm-scoped overlay network)
docker network create --driver overlay --attachable --subnet=10.0.9.0/24 my-net
# start container A
docker run -d --name=A --network=my-net producer:latest
# start container B
docker run -d --name=B --network=my-net consumer:latest
# magic has occurred
docker exec -it B /bin/bash
> curl A:3000 # MIND BLOWN!
Then inside container B you can just curl hostname A and it will resolve for you (even when you start doing scaling etc.)
If you're not keen on using Docker swarm you can still use Docker legacy links as well:
docker run -d --name B --link A:A consumer:latest
which would link any exposed (not published) ports of your A container.
And finally, if you start moving to production...forget about links & overlay networks altogether...use Kubernetes :-) Bit more difficult initial setup but they introduce a bunch of concepts & tools to make linking & scaling clusters of containers a lot easier! But that's just my personal opinion.
By running your container B with the --network host argument, you can simply access your container A using localhost; no public IP needed.
> docker run -d --name containerB --network host yourimagename:version
After you run container B with the above command, you can curl container A from container B like this:
> docker exec -it containerB /bin/bash
> curl http://localhost:3000
None of the current answers explain why the Docker containers behave as described in the question:
Docker provides a lightweight isolation of the host's resources to one or several containers.
The Docker network is isolated from the host network by default, and uses a bridge network (again, by default; you can have an overlay network) for inter-container communication.
Nor do they explain how to fix the problem without Docker networks.
From "How to connect to the Docker host from inside a Docker container?"
As of Docker version 18.03, you can use the host.docker.internal hostname to connect to your Docker host from inside a Docker container.
This works fine on Docker for Mac and Docker for Windows, but unfortunately it was not supported on Linux until Docker 20.10.0 was released in December 2020.
Starting from version 20.10, the Docker Engine also supports communicating with the Docker host via host.docker.internal on Linux.
It won't work out of the box on Linux, though, because you need to add the extra --add-host run flag:
--add-host=host.docker.internal:host-gateway
This is for development purpose and will not work in a production environment outside of Docker Desktop for Windows/Mac.
That way, you don't have to change your network driver to --network=host, and you can still access the host through host.docker.internal.
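A minimal sketch of the Linux flow (assuming the alpine image is available):
docker run --rm --add-host=host.docker.internal:host-gateway alpine \
    ping -c 1 host.docker.internal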
I had a similar problem: I have an nginx server in one container (let's call it web) with several server blocks, and cron installed in another container (let's call it cron). I use docker compose. I wanted to use curl from cron to web from time to time to execute some php script on one of the applications. It should look as follows:
curl http://app1.example.com/some_maintance.php
But I always got host unreachable after some time.
The first solution was to update /etc/hosts in the cron container, adding:
1.2.3.4 app1.example.com
where 1.2.3.4 is the IP of the web container. It worked, but this is a hack, and as far as I know such manual updates are discouraged. You should use extra_hosts in docker compose, which requires an explicit IP address instead of a container name.
I tried the custom networks solution, which as far as I have seen is the correct way to deal with this, but I never succeeded. If I ever learn how to do it, I promise to update this answer.
Finally I used curl's ability to target the server directly, passing the domain name as a Host header in a separate parameter:
curl -H'Host: app1.example.com' web/some_maintance.php
Not very beautiful, but it does work.
(Here web is the name of my nginx container.)

Docker: how to open ports to the host machine?

What could be the reason for Docker containers not being able to connect via ports to the host system?
Specifically, I'm trying to connect to a MySQL server that is running on the Docker host machine (172.17.0.1 on the Docker bridge). However, for some reason port 3306 is always closed.
The steps to reproduce are pretty simple:
Configure MySQL (or any service) to listen on 0.0.0.0 (bind-address=0.0.0.0 in ~/.my.cnf)
run
$ docker run -it alpine sh
# apk add --update nmap
# nmap -p 3306 172.17.0.1
That's it. No matter what I do it will always show
PORT STATE SERVICE
3306/tcp closed mysql
I've tried the same with an ubuntu image, a Windows host machine, and other ports as well.
I'd like to avoid --net=host if possible, simply to make proper use of containerization.
It turns out the IPs weren't correct. There was nothing blocking the ports and the services were running fine too. ping and nmap showed the IP as online but for some reason it wasn't the host system.
Lesson learned: don't rely on route inside the container to return the correct host address. Instead, check ifconfig or ipconfig on the Linux or Windows host respectively and pass this IP via environment variables.
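For example (a sketch; 192.168.1.50 stands in for whatever IP ifconfig/ipconfig reports for your host):
docker run -it -e HOST_IP=192.168.1.50 alpine sh
# apk add --update nmap
# nmap -p 3306 "$HOST_IP"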
Right now I'm transitioning to using docker-compose and have put all required services into containers, so the host system doesn't need to get involved and I can simply rely on Docker's DNS. This is much more satisfying.

How do docker containers resolve hostname of other docker containers running on the same machine?

I have started to use docker and like it, mostly because Docker containers are kind of light-weight VMs. But I am unable to figure out how docker containers can resolve each other's hostnames. They can connect to each other using their IPs, but not using their hostnames, and I cannot even edit /etc/hosts in the containers to make up for that somehow. When I restart the containers they get different IPs, hence I want to use hostnames in place of IPs for them to communicate with each other. Let us say I want to run Zookeeper instances of a Zookeeper cluster in the containers, and I want to put the hostnames of the Zookeeper servers in the config (zoo.cfg) files.
As of Docker 1.10, if you create a distinct docker network, Docker will resolve hostnames between the containers on it using an internal DNS server [1][2][3].
You can change the hostname used on the network by specifying one with --name in docker run. Otherwise the hostname will be the container id (a 12-character hash, shown by docker container ls).
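A minimal sketch of this behaviour (assuming the nginx and alpine images):
docker network create my-net
docker run -d --name web --network my-net nginx
docker run --rm --network my-net alpine ping -c 1 web   # "web" resolves via the internal DNS server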
See also:
Docker doesn't resolve hostname
When to use --hostname in docker?
Sources:
[1] = docker docs - Embedded DNS server in user-defined networks
[2] = Docker Engine release notes - 1.10.0 (2016-02-04) - Networking
[3] = Docker pull requests - Vendoring libnetwork
It may be worth checking out Docker links (https://docs.docker.com/userguide/dockerlinks/). When you link to a running container, a host entry is added for the container you wish to connect to.
In their example they show
$ sudo docker run -t -i --rm --link db:db training/webapp /bin/bash
root@aed84ee21bde:/opt/webapp# cat /etc/hosts
172.17.0.7 aed84ee21bde
. . .
172.17.0.5 db
As you see here, they link the webapp container (where they're running bash) to the container named db, and subsequently a host entry for db is added with the IP address of that container.
So if you have zookeeper running, you could simply make the containers you start link to it. I hope this helps!
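A sketch of what that could look like (the official zookeeper image is assumed, and my-app is a hypothetical application image):
docker run -d --name zookeeper zookeeper
docker run -d --link zookeeper:zookeeper my-app   # my-app gets a "zookeeper" entry in /etc/hosts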
This can depend on the OS of the container, but supposing the container runs Linux,
you can check its DNS configuration this way:
cat /etc/resolv.conf
It can return something like:
nameserver 127.0.0.11
options ndots:0
/etc/resolv.conf is the standard configuration file for DNS on UNIX-like OSes.
In this particular case the container is configured to use 127.0.0.11 as its DNS server.
So the container can query it to determine the IP address of another container by its hostname.
You can check whether that nameserver actually works by using the nslookup command, e.g.:
nslookup redis 127.0.0.11
which will contact the DNS server 127.0.0.11 and ask it to resolve the hostname "redis".
It can return something like:
Server: 127.0.0.11
Address 1: 127.0.0.11
Name: redis
Address 1: 172.21.0.3 counter-app_redis_1.counter-app_counter-net
which means the hostname resolved to IP 172.21.0.3.
In this specific case, the nameserver entry was added by the following entry in the docker-compose.yml configuration file:
...
networks:
  counter-net:
This top-level entry configures a common bridge network shared by several docker containers.
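For completeness, a fuller sketch of such a compose file (the service and image names echo the counter-app naming in the nslookup output above, but are assumptions):
version: '3'
services:
  web-fe:
    image: counter-app       # hypothetical application image
    networks:
      - counter-net
  redis:
    image: redis
    networks:
      - counter-net
networks:
  counter-net: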
