I'm using a docker container to run some browser tests.
For some OAuth workflow, I need a custom hostname that I can forward to the OAuth site, for example my.dev.site.
Usually, in non-Docker environments, I just add an entry to the /etc/hosts file that maps my.dev.site to 127.0.0.1.
Is this possible with docker and if so, how?
By default, Docker containers are identified by their container name.
However, in a Compose file you can use the extra_hosts field to add hostnames to /etc/hosts within containers.
https://docs.docker.com/compose/compose-file/compose-file-v3/#extra_hosts
extra_hosts:
- "my.dev.site:127.0.0.1"
And the docker run equivalent:
https://docs.docker.com/engine/reference/run/#network-settings
docker run --add-host my.dev.site:127.0.0.1 <image>
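For context, a minimal docker-compose.yml could look like the sketch below (the service name and image are placeholders, not from the question):

services:
  tests:                        # placeholder service name
    image: my-browser-tests     # placeholder image
    extra_hosts:
      - "my.dev.site:127.0.0.1"

Note that 127.0.0.1 inside the container refers to the container itself, which matches the original non-Docker workflow as long as the service being forwarded to runs in the same container.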
Related
In my case, I have a machine running Docker (the Docker host) and I spin up some containers on this Docker host.
I need the containers' services to be able to talk to each other - the containers expose ports, and they also need to be resolvable by hostname (e.g. example.com),
so container A needs to talk to container B with a URL like example.com:3000.
I've read this article, but I'm not quite sure about "inheriting" from the Docker host: will the Docker host's /etc/hosts be appended to the /etc/hosts of containers running on that host?
https://docs.docker.com/engine/reference/run/#managing-etchosts
How can I achieve this?
Does this "inheriting" have any connection to the type of Docker container networking used (https://docs.docker.com/v17.09/engine/userguide/networking/)?
It does not inherit the host's /etc/hosts file. Docker itself writes the file inside your container; it is updated when you use the --add-host parameter or extra_hosts in docker-compose, so you can add individual records that way (https://docs.docker.com/compose/compose-file/#extra_hosts).
Although, if you're just trying to get two containers talking to each other, you can alternatively connect them to the same network. In docker-compose you can create what's called an external network and have all your docker-compose files reference it. You will then be able to connect by using the full Docker container name (e.g. http://project_app_1:3000).
See https://docs.docker.com/compose/compose-file/#external
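A sketch of that setup (network, service and image names are placeholders):

# create the shared network once on the host
docker network create shared-net

# docker-compose.yml in each project
services:
  app:
    image: my-app               # placeholder
    networks:
      - shared-net

networks:
  shared-net:
    external: true

On a user-defined network like this, containers can also reach each other by service name (e.g. http://app:3000), provided the names are unique across the projects.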
Within a Docker container, I would like to connect to a MySQL database that resides on the local network. However, I get errors because it cannot find the hostname, so my current hotfix is to hardcode the IP (which is bound to change at some point).
Hence; is it possible to forward a hostname from the host machine to the Docker container at docker run?
Yes, it is possible. Just inject the hostname as an environment variable when running the docker run command:
$ hostname
np-laptop
$ docker run -ti -e HOSTNAME=$(hostname) alpine:3.7
/ # env
HOSTNAME=np-laptop
SHLVL=1
HOME=/root
TERM=xterm
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
Update:
I think you can do two things with docker run for your particular case:
1. Bind-mount the /etc/hosts file from the host into the container.
2. Define any DNS server you want inside the container with the --dns flag.
So, finally the command is:
docker run -ti -v /etc/hosts:/etc/hosts --dns=<IP_of_DNS> alpine:3.7
Docker containers by default have access to the outside network and resolve DNS names using the DNS servers configured on the host (Docker copies the host's resolv.conf into the container), so this should work out of the box.
I remember having a similar problem in my corporate network; I solved it by referencing the remote server in the app by its FQDN - our-database.mycompany.com - instead of just our-database.
Hope this helps.
People have asked similar questions and got good answers:
How do I pass environment variables to Docker containers?
Alternatively you can configure the DHCP/DNS server that serves the docker machines to resolve the hostnames properly. DDNS is another option that can simplify configuration as well.
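For example, a sketch of passing the database host through the environment instead of hardcoding an IP (variable and image names are placeholders):

docker run -e DB_HOST=db.example.internal -e DB_PORT=3306 my-app

The application then reads DB_HOST/DB_PORT from its environment, and only the DNS/DHCP configuration needs to know the actual address.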
I am creating an Nginx container that I would like to access locally at http://api. Using Docker Machine, I assumed that running docker-machine create default and docker-machine ip default to get the IP, and then editing my hosts file to something like this:
# docker-machine ip default --> 192.168.99.100
192.168.99.100 api
should map requests for api to the Docker Machine IP and serve my content.
Two things are confusing me:
I launch Docker through the Mac App and can create Nginx containers and access content at http://localhost. However, running docker-machine ls returns no machines. This is confusing because I thought Docker had to run on a VM.
Starting from scratch and starting Docker Machine, then spinning up containers, seems to have no effect. In other words, I can still access content at http://localhost but not at http://api.
Instead of accessing my container at http://localhost I want to access it at http://api. How do I do this?
I'm using Docker for Mac 17.12 and Docker Machine 0.14.
Based on this part of your question:
Instead of accessing my container at http://localhost I want to access
it at http://api. How do I do this?
Your docker run command:
docker run -it --rm --name test --add-host api:192.168.43.8 -p 80:80 apachehttpd
1st Thing: The --add-host flag adds an entry to /etc/hosts inside your container, so http://api will also resolve inside the container; you can verify it by pinging api from inside that container.
2nd Thing: Edit your host's /etc/hosts file and add
192.168.43.8 api   (replace 192.168.43.8 with your IP)
Then you can also reach http://api in the browser on the host.
How is it possible to resolve names defined in the Docker host's /etc/hosts inside containers?
Containers running on my Docker host can resolve public names (e.g. www.ibm.com), so Docker DNS is working fine.
I would like to resolve names from the Docker host's /etc/hosts (e.g. 127.17.0.1 smtp) from inside containers.
My final goal is to connect to services running on the Docker host (e.g. an SMTP server) from containers. I know I can use the Docker host IP (127.17.0.1) from containers, but I thought that Docker would have used the Docker host's /etc/hosts to build the containers' resolver files as well.
I am even quite sure I have seen this working a while ago... but I could be wrong.
Any thoughts?
Giovanni
Check out the --add-host flag for the docker command: https://docs.docker.com/engine/reference/run/#managing-etchosts
$ docker run --add-host="smtp:127.17.0.1" container command
In Docker, /etc/hosts cannot be overwritten or modified at runtime (security feature). You need to use Docker's API, in this case --add-host to modify the file.
For docker-compose, use the extra_hosts option.
For the whole "connect to services running in host" problem, see the discussion in this GitHub issue: https://github.com/docker/docker/issues/1143.
The common approach for this problem is to use --add-host with Docker's gateway address for the host, e.g. --add-host="dockerhost:172.17.42.1". Check the issue above for some scripts that find the correct IP and start your containers.
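A minimal sketch of that approach (assumes a Linux host with the default docker0 bridge; the image name is a placeholder):

# find the docker0 bridge IP on the host
DOCKER_HOST_IP=$(ip -4 addr show docker0 | awk '/inet / {sub(/\/.*/, "", $2); print $2}')

# start the container with a "dockerhost" entry pointing back at the host
docker run --add-host="dockerhost:${DOCKER_HOST_IP}" my-image

Inside the container, services on the host are then reachable as dockerhost, e.g. dockerhost:25 for the SMTP case.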
You can set up a simple DNS server on the host, and point the container's /etc/resolv.conf at the Docker host's DNS server.
For example, with dnsmasq you can add addn-hosts=/etc/hosts to the config file. The container, by using the Docker host's DNS server, will then be able to resolve the host's /etc/hosts entries.
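A sketch of that setup (assumes dnsmasq running on the host and the default bridge gateway 172.17.0.1; adjust the address to your network):

# /etc/dnsmasq.conf on the host
listen-address=172.17.0.1     # answer queries coming in over the docker bridge
addn-hosts=/etc/hosts         # serve the host's /etc/hosts entries over DNS

# point a container at the host's dnsmasq
docker run --dns=172.17.0.1 alpine nslookup smtp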
I want to make it so that the Docker container I spin up uses the same /etc/hosts settings as the host machine I run it from. Is there a way to do this?
I know there is an --add-host option with docker run, but that's not exactly what I want because the host machine's /etc/hosts file may be different on different machines, so it's not great for me to hardcode exact IP addresses/hosts with --add-host.
Use --network=host in the docker run command. This tells Docker to make the container use the host's network stack.
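For example (a sketch; assumes a Linux host, since on Docker Desktop the "host" network belongs to the Docker VM rather than to your Mac/Windows machine):

# my.dev.site stands in for any name already present in the host's /etc/hosts
docker run --rm --network=host alpine ping -c 1 my.dev.site

Keep in mind that with host networking the container shares the host's interfaces, so -p port mappings are ignored.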
A standard hosts file -
docker run -it ubuntu cat /etc/hosts
Add a mapping for server 'foo' -
docker run -it --add-host foo:10.0.0.3 ubuntu cat /etc/hosts
Add mappings for multiple servers
docker run -it --add-host foo:10.0.0.3 --add-host bar:10.7.3.21 ubuntu cat /etc/hosts
Reference - Docker Now Supports Adding Host Mappings
extra_hosts (in docker-compose.yml)
https://github.com/compose-spec/compose-spec/blob/master/spec.md#extra_hosts
Add hostname mappings. Use the same values as the docker client --add-host parameter.
extra_hosts:
- "somehost:162.242.195.82"
- "otherhost:50.31.209.229"
You can also install dnsmasq on the host machine with the command:
sudo apt-get install dnsmasq
Then create the file /etc/docker/daemon.json with this content:
{
    "dns": ["host_ip_address", "8.8.8.8"]
}
After that, restart the Docker service with sudo service docker restart.
This option makes every Docker container use the host's DNS server.
Or you can do it for a single container: docker run has a --dns command-line option, and docker-compose supports an equivalent dns option.
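A sketch of the per-container variants (the DNS address and hostname are placeholders for your own values):

# docker run
docker run --dns 192.168.1.1 alpine nslookup some-internal-host

# docker-compose.yml
services:
  app:
    image: alpine
    dns:
      - 192.168.1.1

Both have the same effect as the daemon-wide setting, but scoped to a single container.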
If you are using docker-compose.yml, the corresponding property is:
services:
  xxx:
    network_mode: "host"
Add this to your run command:
-v /etc/hosts:/etc/hosts
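A full command as a sketch (mounted read-only so the container cannot alter the host's file):

docker run --rm -v /etc/hosts:/etc/hosts:ro ubuntu cat /etc/hosts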
If trusted users start your containers, you could use a shell function to easily "copy" the /etc/hosts entries that you need:
# Print an --add-host option for the given hostname, using the IP found for it in /etc/hosts
add_host_opt() { awk "/\\<${1}\\>/ {print \"--add-host $1:\" \$1}" /etc/hosts; }
You can then do:
docker run $(add_host_opt host.name) ubuntu cat /etc/hosts
That way you do not have to hard-code the IP addresses.
The host machine's /etc/hosts file can't be mounted into a container. But you can mount a folder into the container, and you'll need a dnsmasq container.
Create a new folder on the host machine and hard-link the hosts file into it:
mkdir -p ~/new_hosts/
ln /etc/hosts ~/new_hosts/hosts
Mount ~/new_hosts/ into the container:
docker run -it -v ~/new_hosts/:/new_hosts centos /bin/bash
Configure dnsmasq to use /new_hosts/hosts to resolve names.
Change your containers' DNS server to the dnsmasq container's IP address.
If you change the /etc/hosts file on the host machine, the dnsmasq container's /new_hosts/hosts will change as well.
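A sketch of those steps (assumes the andyshinn/dnsmasq image; any dnsmasq image should work, adjust accordingly):

# run dnsmasq with the mounted hosts folder
docker run -d --name dns --cap-add=NET_ADMIN -v ~/new_hosts/:/new_hosts/ \
    andyshinn/dnsmasq --addn-hosts=/new_hosts/hosts

# find the dnsmasq container's IP and use it as the DNS server for other containers
DNS_IP=$(docker inspect -f '{{.NetworkSettings.IPAddress}}' dns)
docker run -it --dns "$DNS_IP" centos /bin/bash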
I found a problem:
The file /new_hosts/hosts inside the dnsmasq container does change, but new hosts still don't resolve. The reason is that dnsmasq uses inotify to listen for change events; when you modify the file on the host machine, dnsmasq doesn't receive the signal, so it doesn't reload the entries. So you may need to write a small daemon process that copies the /new_hosts/hosts content to another file periodically, and change the dnsmasq configuration to use that new file.
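A very rough sketch of such a helper, run inside the dnsmasq container (paths and interval are arbitrary; whether the copy reliably triggers a reload depends on your dnsmasq version, so treat it only as a starting point):

# periodically copy the mounted file so dnsmasq sees a fresh change event
while true; do
    cp /new_hosts/hosts /new_hosts/hosts.resolved
    sleep 10
done

Point dnsmasq at the copy instead, e.g. addn-hosts=/new_hosts/hosts.resolved.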
I had the same problem, and I found it arguably goes against the containerization concept! However, I solved it by adding each (IP, host) pair from /etc/hosts to an existing, already-running container this way:
docker stop your-container-name
systemctl stop docker
vi /var/lib/docker/containers/*your-container-ID*/hostconfig.json
find ExtraHosts in the file and add to the list, or replace null, with
"ExtraHosts": ["your.domain-name.com:it.s.ip.addr"]
systemctl start docker
docker start your-container-name
If you can stop your container and re-run it, you'd be in a better situation, so just do that. But if you do not want to destroy your containers, as in my case, this is a workable solution.
If you are running Docker containers inside a virtual machine and there are hosts (other VMs, etc.) you want those containers to be aware of, then, depending on the VM software you are using, you will have to ensure that the machine hosting the VM has entries for whatever machines the containers need to resolve.
This is because the VM and its containers will have the IP address of the machine hosting the VM in their resolv.conf file.
IMO, passing the --network=host option when running Docker, as suggested by d3ming, is a better option than the alternatives suggested in other answers:
Any change in the host's /etc/hosts file is immediately available to the container, which is probably what you want if you have such a requirement in the first place.
It's probably not a good idea to use the -v option to mount the host's /etc/hosts file, as any unintended change by the container will spoil the host's configuration.