Within a Docker container, I would like to connect to a MySQL database that resides on the local network. However, I get errors because the hostname cannot be resolved, so my current hot fix is to hardcode the IP (which is bound to change at some point).
Hence: is it possible to forward a hostname from the host machine to the Docker container at docker run?
Yes, it is possible. Just inject the hostname as an environment variable when running the docker run command:
$ hostname
np-laptop
$ docker run -ti -e HOSTNAME=$(hostname) alpine:3.7
/ # env
HOSTNAME=np-laptop
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
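Since the original goal was reaching a MySQL box on the LAN, the same mechanism can carry the database host's name instead of a hardcoded IP; a minimal sketch, where DB_HOST, mysql-box.local, and my-app are all made-up names:
# Hypothetical: pass the database host's name in at run time instead of
# hardcoding its IP; the application then reads $DB_HOST and connects to it,
# provided the container can resolve that name (see the update below).
docker run -ti -e DB_HOST=mysql-box.local my-app:latest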
Update:
I think you can do two things with docker run for your particular case:
1. Bind-mount the /etc/hosts file from the host into the container.
2. Define whatever DNS server you want for the container with the --dns flag.
So, finally the command is:
docker run -ti -v /etc/hosts:/etc/hosts --dns=<IP_of_DNS> alpine:3.7
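To verify that the container actually sees the mounted hosts file and can resolve your database host, a quick check like this should work (mysql-box and 192.168.1.1 are example values; substitute your own):
docker run -ti --rm -v /etc/hosts:/etc/hosts --dns=192.168.1.1 alpine:3.7 \
    sh -c 'cat /etc/hosts && ping -c 1 mysql-box'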
By default, Docker containers have network access through the host and can resolve DNS names using the DNS servers configured on the host, so this should work out of the box.
I remember having a similar problem in my corporate network. I solved it by referencing the remote server in the app by its FQDN (our-database.mycompany.com) instead of just our-database.
Hope this helps.
People have asked similar questions and gotten good answers:
How do I pass environment variables to Docker containers?
Alternatively you can configure the DHCP/DNS server that serves the docker machines to resolve the hostnames properly. DDNS is another option that can simplify configuration as well.
Related
I am looking for a way to connect to my host machine from inside a Docker container (in my case, to access a specific port for using a proxy in the application container).
I tried network_mode: "host" (or docker run --network="host"); it worked for accessing the local machine but caused other problems related to changing the network driver to host:
SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known.
Also, I cannot use ifconfig to define a network alias, since I'm using Ubuntu 18.04, where ifconfig is no longer available by default.
What should I do?
UPDATE: Since the docker-host image (https://github.com/qoomon/docker-host) was published a few months ago, you can use that without any manual configuration. Easy peasy!
After struggling for a day, I finally found the solution. It can be done with the --add-host flag in the docker run command, or extra_hosts in a docker-compose.yml file, combined with creating an alias for the local (lo, 127.0.0.1) network interface.
So here are the instructions:
First, create an alias for the lo interface. As mentioned, the ifconfig command does not exist on Ubuntu 18.04, so this is how we do it:
sudo ip addr add 192.168.0.20/24 dev lo label lo:1
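You can verify the alias was created before wiring it into Docker:
ip addr show label lo:1   # should list inet 192.168.0.20/24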
Then, put this in your docker-compose.yml file:
extra_hosts:
- "otherhost:192.168.0.20"
If you are not using Docker Compose, you can add a host to a container with the --add-host flag (note that options must come before the image name). Something like:
docker run --add-host="otherhost:192.168.0.20" image-name
Finally, when you're done with the above steps, restart your containers with docker-compose down && docker-compose up -d or docker-compose restart
Now you can log in to your container (docker-compose exec container-name bash) and test it.
NOTE: Make sure your working port is open using telnet [interface-ip] [port] command.
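Putting the steps above together, a minimal docker-compose.yml sketch might look like this (the service and image names are made up):
services:
  app:
    image: my-app:latest
    extra_hosts:
      - "otherhost:192.168.0.20"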
You can use extra_hosts in your docker-compose file, which is what you discovered by yourself. I just wanted to add another way for when you are working in your local environment.
In docker-for-mac and docker-for-windows, within a container the DNS name host.docker.internal resolves to an IP address allowing network access to the host.
Here's the related description, extracted from the documentation:
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker for Windows.
There's an open issue on github concerning the implementation of this feature for docker-for-linux.
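For example, if a service is listening on port 3000 on your Mac, a quick check from inside a container could look like this (the port and image are just examples):
docker run --rm alpine wget -qO- http://host.docker.internal:3000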
I am creating an Nginx container that I would like to access locally at http://api. Using Docker Machine, I assumed that running docker-machine create default, getting the IP with docker-machine ip default, and editing my hosts file to something like this:
# docker-machine ip default --> 192.168.99.100
192.168.99.100 api
should map requests for api to the Docker Machine IP and serve my content.
Two things are confusing me:
I launch Docker through the Mac App and can create Nginx containers and access content at http://localhost. However, running docker-machine ls returns no machines. This is confusing because I thought Docker had to run on a VM.
Starting from scratch, starting Docker Machine, and then spinning up containers seems to have no effect. In other words, I can still access content at http://localhost but not at http://api.
Instead of accessing my container at http://localhost I want to access it at http://api. How do I do this?
I'm using Docker for Mac 17.12 and Docker Machine 0.14.
Based on this part of your question:
Instead of accessing my container at http://localhost I want to access
it at http://api. How do I do this?
Your docker run command:
docker run -it --rm --name test --add-host api:192.168.43.8 -p 80:80 apachehttpd
1st thing: the --add-host flag adds an entry to /etc/hosts inside your container, so http://api will also resolve inside the container, for example if you ping it from within.
Pinging api from inside the container will then resolve to 192.168.43.8.
2nd thing: edit your host machine's /etc/hosts file (note the format is IP first, then hostname) and add:
192.168.43.8 api
(replace 192.168.43.8 with your own IP).
After that, http://api will also work in your browser.
How is it possible to resolve names defined in the Docker host's /etc/hosts inside containers?
Containers running on my Docker host can resolve public names (e.g. www.ibm.com), so Docker DNS is working fine.
I would like to resolve names from the Docker host's /etc/hosts (e.g. 127.17.0.1 smtp) from containers.
My final goal is to connect to services running on the Docker host (e.g. an SMTP server) from containers. I know I can use the Docker host IP (127.17.0.1) from containers, but I thought that Docker would have used the Docker host's /etc/hosts to build the containers' resolver files as well.
I am even quite sure I have seen this working a while ago... but I could be wrong.
Any thoughts?
Giovanni
Check out the --add-host flag for the docker command: https://docs.docker.com/engine/reference/run/#managing-etchosts
$ docker run --add-host="smtp:127.17.0.1" container command
In Docker, the /etc/hosts file cannot be overwritten or modified at runtime (a security feature). You need to use Docker's API, in this case --add-host, to modify the file.
For docker-compose, use the extra_hosts option.
For the whole "connect to services running in host" problem, see the discussion in this GitHub issue: https://github.com/docker/docker/issues/1143.
The common approach for this problem is to use --add-host with Docker's gateway address for the host, e.g. --add-host="dockerhost:172.17.42.1". Check the issue above for some scripts that find the correct IP and start your containers.
You can set up a simple DNS server on the host and point the container's /etc/resolv.conf at the Docker host's DNS server.
For example, with dnsmasq you can add addn-hosts=/etc/hosts to the config file. A container using the Docker host's DNS server will then be able to resolve entries from the host's /etc/hosts.
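A minimal sketch of that setup, assuming the default Docker bridge IP of 172.17.0.1 (check yours with ip addr show docker0):
# /etc/dnsmasq.conf on the Docker host:
#   listen-address=172.17.0.1
#   addn-hosts=/etc/hosts
# Then point a container at it and test a name from the host's /etc/hosts:
docker run --rm --dns=172.17.0.1 alpine ping -c 1 smtp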
I really don't understand what's going on here. I simply want to perform an HTTP request from inside one Docker container to another Docker container, via the host, using the host's public IP, on a published port.
Here is my setup. I have my dev machine. And I have a docker host machine with two containers. CONT_A listens and publishes a web service on port 3000.
DEV-MACHINE
HOST (Public IP = 111.222.333.444)
CONT_A (Publish 3000)
CONT_B
On my dev machine (a completely different machine)
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I SSH into the HOST
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I execute inside CONT_B
Not possible, just a timeout. Ping is fine, though...
docker exec -it CONT_B bash
$ curl http://111.222.333.444:3000 --> TIMEOUT
$ ping 111.222.333.444 --> OK
Why?
Ubuntu 16.04, Docker 1.12.3 (default network setup)
I know this isn't strictly an answer to the question, but there's a more Docker-ish way of solving your problem. I would forget about publishing the port for inter-container communication altogether. Instead, create an overlay network using Docker swarm. You can find the full guide here, but in essence you do the following:
# Create the network
docker network create --driver overlay --subnet=10.0.9.0/24 my-net
# Start container A
docker run -d --name=A --network=my-net producer:latest
# Start container B
docker run -d --name=B --network=my-net consumer:latest
# Magic has occurred
docker exec -it B /bin/bash
> curl A:3000   # MIND BLOWN!
Then inside container B you can just curl hostname A and it will resolve for you (even when you start scaling, etc.).
If you're not keen on using Docker swarm you can still use Docker legacy links as well:
docker run -d --name B --link A:A consumer:latest
which would link any exposed (not published) ports in your A container.
And finally, if you start moving to production... forget about links & overlay networks altogether... use Kubernetes :-) A bit more difficult initial setup, but it introduces a bunch of concepts & tools to make linking & scaling clusters of containers a lot easier! But that's just my personal opinion.
By running your container B with the --network host argument, you can simply access your container A using localhost; no public IP needed.
> docker run -d --name containerB --network host yourimagename:version
After you run container B with the above command, you can curl container A from container B like this:
> docker exec -it containerB /bin/bash
> curl http://localhost:3000
None of the current answers explain why the Docker containers behave as described in the question
Docker is there to provide a lightweight isolation of the host resources to one or several containers.
The Docker network is by default isolated from the host network and uses a bridge network (again, by default; you can have an overlay network) for inter-container communication.
nor how to fix the problem without Docker networks.
From "How to connect to the Docker host from inside a Docker container?"
As of Docker version 18.03, you can use the host.docker.internal hostname to connect to your Docker host from inside a Docker container.
This works fine on Docker for Mac and Docker for Windows, but unfortunately it was not supported on Linux until Docker 20.10.0 was released in December 2020.
Starting from version 20.10, the Docker Engine also supports communicating with the Docker host via host.docker.internal on Linux.
Unfortunately, this won't work out of the box on Linux because you need to add the extra --add-host run flag:
--add-host=host.docker.internal:host-gateway
This is for development purpose and will not work in a production environment outside of Docker Desktop for Windows/Mac.
That way, you don't have to change your network driver to --network=host, and you still can access the host through host.docker.internal.
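A quick way to check it on Linux (the alpine image is only for illustration):
docker run --rm --add-host=host.docker.internal:host-gateway alpine \
    ping -c 1 host.docker.internal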
I had a similar problem: I have an nginx server in one container (let's call it web) with several server blocks, and cron installed in another container (let's call it cron). I use docker compose. I wanted to use curl from cron to web from time to time to execute some PHP script on one of the applications. It should look as follows:
curl http://app1.example.com/some_maintance.php
But I was always getting host unreachable after some time.
The first solution was to update /etc/hosts in the cron container and add:
1.2.3.4 app1.example.com
where 1.2.3.4 is the IP of the web container, and it worked. But this is a hack, and as far as I know such manual updates are discouraged. You should use extra_hosts in docker compose, but that requires an explicit IP address instead of the container's name.
I tried the custom networks solution, which as far as I have seen is the correct way to deal with this, but I never succeeded here. If I ever learn how to do this, I promise to update this answer.
Finally, I used curl's ability to target the server directly and passed the domain name as a Host header:
curl -H'Host: app1.example.com' web/some_maintance.php
Not very beautiful, but it does work (here web is the name of my nginx container).
I want to make it so that the Docker containers I spin up use the same /etc/hosts settings as the host machine I run them from. Is there a way to do this?
I know there is an --add-host option with docker run, but that's not exactly what I want because the host machine's /etc/hosts file may differ between machines, so it's not great for me to hardcode exact IP addresses/hosts with --add-host.
Use --network=host in the docker run command. This tells Docker to make the container use the host's network stack. You can learn more here.
View a container's standard hosts file -
docker run -it ubuntu cat /etc/hosts
Add a mapping for server 'foo' -
docker run -it --add-host foo:10.0.0.3 ubuntu cat /etc/hosts
Add mappings for multiple servers
docker run -it --add-host foo:10.0.0.3 --add-host bar:10.7.3.21 ubuntu cat /etc/hosts
Reference - Docker Now Supports Adding Host Mappings
extra_hosts (in docker-compose.yml)
https://github.com/compose-spec/compose-spec/blob/master/spec.md#extra_hosts
Add hostname mappings. Use the same values as the docker client --add-host parameter.
extra_hosts:
- "somehost:162.242.195.82"
- "otherhost:50.31.209.229"
Also, you can install dnsmasq on the host machine, with the command:
sudo apt-get install dnsmasq
And then you need to add the file /etc/docker/daemon.json with the content (note there must be no trailing comma for the JSON to be valid):
{
  "dns": ["host_ip_address", "8.8.8.8"]
}
After that, you need to restart the Docker service with the command sudo service docker restart.
This option forces every Docker container to use the host's DNS options.
Or you can use it for a single container; the command-line options are explained at this link. docker-compose options are also supported (you can read about them at this link).
If you are using docker-compose.yml, the corresponding property is:
services:
xxx:
network_mode: "host"
Source
Add this to your run command:
-v /etc/hosts:/etc/hosts
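For example, mounting it read-only so the container cannot modify the host's file (the image is arbitrary):
docker run -it --rm -v /etc/hosts:/etc/hosts:ro ubuntu cat /etc/hosts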
If trusted users start your containers, you could use a shell function to easily "copy" the /etc/hosts entries that you need:
add_host_opt() { awk "/\\<${1}\\>/ {print \"--add-host $1:\" \$1}" /etc/hosts; }
You can then do:
docker run $(add_host_opt host.name) ubuntu cat /etc/hosts
That way you do not have to hard-code the IP addresses.
The host machine's /etc/hosts file can't be mounted into a container. But you can mount a folder into the container, and you will need a dnsmasq container.
Create a new folder on the host machine:
mkdir -p ~/new_hosts/
ln /etc/hosts ~/new_hosts/hosts
Mount ~/new_hosts/ into the container:
docker run -it -v ~/new_hosts/:/new_hosts centos /bin/bash
Configure dnsmasq to use /new_hosts/hosts to resolve names.
Change your container's DNS server. Use the dnsmasq container's IP address.
If you change the /etc/hosts file on the host machine, the dnsmasq container's /new_hosts/hosts will change.
I found a problem: the file /new_hosts/hosts in the dnsmasq container does change, but the new hosts can't be resolved. dnsmasq uses inotify to listen for change events, and when you modify the file on the host machine, dnsmasq doesn't receive the signal, so it doesn't update its configuration. You may need to write a daemon process that copies the /new_hosts/hosts content to another file on every change, and configure dnsmasq to use that new file.
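A rough sketch of such a daemon, assuming dnsmasq is pointed at addn-hosts=/etc/dnsmasq-hosts and relying on the fact that dnsmasq re-reads its addn-hosts files on SIGHUP (the paths here are assumptions):
#!/bin/sh
# Copy the mounted hosts file whenever its content changes,
# then signal dnsmasq so it re-reads the addn-hosts file.
while true; do
    if ! cmp -s /new_hosts/hosts /etc/dnsmasq-hosts; then
        cp /new_hosts/hosts /etc/dnsmasq-hosts
        kill -HUP "$(pidof dnsmasq)"
    fi
    sleep 5
done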
I had the same problem and found that it likely runs contrary to the containerization concept! However, I solved my problem by adding each (IP, host) pair from /etc/hosts to an existing, already-created container in this way:
docker stop your-container-name
systemctl stop docker
vi /var/lib/docker/containers/*your-container-ID*/hostconfig.json
find ExtraHosts in the text and add to it, or replace null with:
"ExtraHosts":["your.domain-name.com:it.s.ip.addr"]
systemctl start docker
docker start your-container-name
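Once the container is back up, you can verify the change took effect:
docker inspect -f '{{.HostConfig.ExtraHosts}}' your-container-name
docker exec your-container-name cat /etc/hosts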
If you can stop your container and re-run it, you'd be in a better situation, so just do that. But if you do not want to destroy your containers, as in my case, this is a good solution.
If you are running your Docker containers inside a virtual machine and there are hosts (VMs, etc.) you want your containers to be aware of, then, depending on what VM software you are using, you will have to ensure that the machine hosting the VM has entries for whatever machines you want the containers to be able to resolve.
This is because the VM and its containers will have the IP address of the host machine (of the VMs) in their resolv.conf file.
IMO, passing the --network=host option while running Docker, as suggested by d3ming, is a better option than those suggested in other answers:
Any change in the host's /etc/hosts file is immediately available to the container, which is probably what you want if you have such a requirement in the first place.
It's probably not a good idea to use the -v option to mount the host's /etc/hosts file, as any unintended change by the container will spoil the host's configuration.