I want to set a hosts-file-like entry for a specific domain name FROM MY APPLICATION INSIDE a Docker container. As in, I want the app in the container to resolve x.com to 192.168.1.3, automatically, without any outside input or configuration. I realize this is unconventional compared to canonical Docker use-cases, so don't @ me about it :)
I want my code to, on a certain branch, use a different hostname:ip mapping for a specific domain. And I want it to do it automatically and without intervention from the host machine, docker daemon, or end-user executing the container. Ideally this mapping would occur at the container infrastructure level, rather than some kind of modification to the application code which would perform this mapping itself.
How should I be doing this?
Why is this hard?
The /etc/hosts file in a Docker container is generated and managed by the Docker daemon; it is bind-mounted into the container, and any edits made from inside are lost when the container restarts, so it cannot be modified reliably. This is by design.
DNS for a Docker container is linked to the DNS of the underlying host in a number of ways, and it's not wise to interfere with it too much.
Requirements:
Inside the container, domain x.com resolves to non-standard, pre-configured IP address.
The container is a standard Docker container running on a host of indeterminate configuration.
Constraints:
I can't pass the configuration as a runtime flag (e.g. --add-host).
I can't expose the mapping externally (e.g. set a DNS entry on the host machine).
I can't modify the underlying host, or count on it being configured a certain way.
Open questions:
Is it possible to set DNS entries from inside the container and override the host's DNS for the container only? If so, what's a good lightweight, low-maintenance tool for this (e.g. dnsmasq, CoreDNS)?
Is there some magic by which I can impersonate the /etc/hosts file, or insert a pre-processed file ahead of it in the resolution chain?
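One pattern that fits these constraints, sketched under the assumption that the container's main process starts as root: bake an entrypoint into the image that appends the mapping to /etc/hosts at startup. The daemon bind-mounts the file, but it normally remains writable to root inside the container. `HOSTS_FILE` is parameterized here only so the sketch can run outside a container; everything else comes from the question.

```shell
#!/bin/sh
# Entrypoint sketch: inject a hosts entry at container start.
# In a real image HOSTS_FILE would be /etc/hosts; it defaults to a
# scratch file here so the sketch can run anywhere.
HOSTS_FILE="${HOSTS_FILE:-./hosts.demo}"
MAPPING="192.168.1.3 x.com"   # the mapping from the question

touch "$HOSTS_FILE"
# Append only if the entry is not already present (idempotent restarts).
grep -qF "$MAPPING" "$HOSTS_FILE" || echo "$MAPPING" >> "$HOSTS_FILE"

# In a real entrypoint, hand off to the application:
# exec "$@"
```

Wired up in the Dockerfile as `ENTRYPOINT ["/entrypoint.sh"]` before the usual `CMD`, the mapping travels with the image and needs no runtime flags or host configuration.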
Related
I want to use docker for its network isolation, but that's all.
More specifically, I want to run two programs and only allow network access to a certain port on the first program if the connection is relayed through the second program. The first program is a VNC server and the second is a WebSocket relay with a custom authentication scheme.
So, I'm thinking about putting them both in a container and using docker port mappings to control their network access.
Can I set up Docker so that I use the host's file system directly? I'd like to do things like access an .Xauthority file and create UNIX domain sockets (the VNC server does this). I know that I could mount the host filesystem in the container, but it would be simpler to just use it directly as the container's filesystem. I think.
Is this possible? Easy?
No: every container is based on an image that packages the filesystem layers. The filesystem namespace cannot be disabled in Docker (unlike the network, PID, and other namespaces, which you can set to "host").
For your requirements, if you do not want to use host volume mounts and do not want to package the application in an image, you would be better off learning about network namespaces in the Linux kernel, which Docker uses to implement container network isolation. The ip netns command is a good place to start.
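As a starting point, the namespace approach might look like the following. This is only a sketch: it requires root and iproute2, the commands are guarded so it degrades to a no-op when run unprivileged, and the server path is hypothetical.

```shell
# Create an isolated network namespace and run a program in it; the
# program keeps the host filesystem but gets its own network stack.
if [ "$(id -u)" -eq 0 ] && command -v ip >/dev/null 2>&1; then
    ip netns add vncdemo                      # new, empty namespace
    ip netns exec vncdemo ip link set lo up   # bring up loopback inside it
    # ip netns exec vncdemo /usr/bin/vncserver   # hypothetical server path
    ip netns list
    ip netns delete vncdemo                   # clean up
else
    echo "skipped: needs root and iproute2"
fi
```

A veth pair could then connect the namespace to the host, with the relay process controlling what crosses it.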
I would like to read the host's ifconfig output while the Docker container is running, so that I can parse it, get the OpenVPN interface (tap0) IP address, and process it within my application.
Unfortunately, propagating this value via the environment won't work in my case, because the IP address can change while the container is running, and I don't want to restart my application container each time just to see the new value.
My current working solution is a cron job on the host which writes the IP into a file on a shared volume, and the container reads from it. But I'm looking for a better solution, as this feels like a workaround. There was also a plan to create a new container with --network host, which can see the host's interfaces; it works, but it also looks like a workaround, as it involves many steps and probably raises security issues.
My question: is there any valid and cleaner way to achieve my goal of reading the host's ifconfig in a Docker container in real time?
A specific design goal of Docker is that containers can't directly access the host's network configuration. The workarounds you've identified are pretty much the only ways to do this.
If you’re trying to modify the host’s network configuration in some way (you’re trying to actually run a VPN, for example) you’re probably better off running it outside of Docker. You’ll still need root permission either way, but you won’t need to disable a bunch of standard restrictions to do what you need.
If you’re trying to provide some address where the service can be reached, using configuration like an environment variable is required. Even if you could access the host’s configuration, this might not be the address you need: consider a cloud environment where you’re running on a cloud instance behind a load balancer, and external clients need the load balancer; that’s not something you can directly know given only the host’s network configuration.
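For reference, the cron-plus-shared-volume workaround from the question can be sketched like this. The interface name and output path are assumptions, and `ip -4 addr show` replaces the parsing of ifconfig output, which is deprecated on most distributions:

```shell
# Host side (run from cron): write the tap0 IPv4 address to a file on a
# volume that is bind-mounted into the container.
TAP_IF="${TAP_IF:-tap0}"
OUT="${OUT:-./shared/tap0.ip}"
mkdir -p "$(dirname "$OUT")"
ip -4 addr show "$TAP_IF" 2>/dev/null \
  | awk '/inet /{sub(/\/.*/,"",$2); print $2}' > "$OUT"

# Container side: the application re-reads the file whenever it needs
# the current address, so no restart is required.
cat "$OUT"
```

Because the container re-reads the file on demand, the address can change at any time without touching the container.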
I have a db container and a server container, both running on the same network. I can ping the db host by its container ID with no problem.
When I set a hostname for the db container manually (-h myname), it takes effect ($ hostname returns the set name), but I can't ping that hostname from another container on the same network. The container ID is still pingable.
It works with no problem in Docker Compose, though.
What am I missing?
The hostname is not used by Docker's built-in DNS service. It's a counterintuitive exception, but since hostnames can change outside of Docker's control, it makes some sense. Docker's DNS will resolve:
the container ID
the container name
any network aliases you define for the container on that network
The easiest of these options is the last one, which is configured automatically when running containers from a compose file: the service name itself is a network alias. This lets you scale and perform rolling updates without reconfiguring other containers.
You need to be on a user-created network, not something like the default bridge, which has DNS disabled. This, too, is done by default when running containers from a compose file.
Avoid links, since they are deprecated. And I'd only recommend adding host entries for external static hosts that are not in any DNS; for container-to-container communication, or for access to other hosts outside of Docker, DNS is preferred.
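A minimal demonstration of these rules, guarded so it is a no-op without a running Docker daemon (the names db, database, and demo-net are made up for the example):

```shell
# On a user-defined network, the container name and any network alias
# resolve via Docker's embedded DNS; the container's hostname does not.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    docker network create demo-net
    docker run -d --rm --name db --hostname myname \
        --network demo-net --network-alias database alpine sleep 60
    docker run --rm --network demo-net alpine ping -c1 db        # resolves
    docker run --rm --network demo-net alpine ping -c1 database  # resolves
    # pinging "myname" would fail: the hostname is not in Docker's DNS
    docker rm -f db
    docker network rm demo-net
else
    echo "skipped: docker daemon not available"
fi
```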
I've found out that the problem can be solved without a shared network by using the --add-host option. The container's IP can be obtained using the inspect command.
But when containers are on the same network, they are able to access each other via their names.
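The inspect step mentioned above can be sketched as follows; the container name `db` and an available daemon are assumptions, and the sketch prints a skip message otherwise:

```shell
# Go-template that extracts a container's IP address on its network(s).
FMT='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
if command -v docker >/dev/null 2>&1 && docker inspect db >/dev/null 2>&1; then
    docker inspect -f "$FMT" db
else
    echo "skipped: no running container named 'db'"
fi
```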
As stated in the docker docs, if you start containers on the default bridge network, adding -h myname will add this information to
/etc/hosts
/etc/resolv.conf
and the bash prompt
of the container just started.
However, this will not have any effect on other, independent containers. (You could use --link to add this information to the /etc/hosts of other containers. However, --link is deprecated.)
On the other hand, when you create a user-defined bridge network, Docker provides an embedded DNS server to make name lookups between containers on that network possible; see Embedded DNS server in user-defined networks. Name resolution uses the container names defined with --name. (You will not find another container by using its --hostname value.)
The reason why it works with docker-compose is that docker-compose creates a custom network for you and automatically names the containers.
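For example, a compose file as small as this (the image and service names are made up) gives each service a resolvable name on an automatically created network:

```yaml
# docker-compose.yml sketch: `server` can reach `db` by name, because
# compose creates a user-defined network and registers each service
# name as a network alias.
services:
  db:
    image: postgres:16
  server:
    image: alpine
    command: ping -c1 db
    depends_on:
      - db
```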
The situation seems to be a bit different, when you don't specify a name for the container yourself. The run reference says
If you do not assign a container name with the --name option, then the daemon generates a random string name for you. [...] If you specify a name, you can use it when referencing the container within a Docker network.
In agreement with your findings, this should be read as: if you don't specify a custom --name, you cannot use the auto-generated name to look up other containers on the same network.
I have a container running on a docker-host, the docker-host has a host entry like:
externalServiceHostEntry <external_service_IP>
I want the service inside my container to be able to talk to the externalService, and I want that service to use an alias hostname (because the container may be deployed on different docker-hosts with different host entries). As such, I'd like to do something like this when I run the container:
docker run --add-host <alias>:<external_service_IP> ...
However, I can't do this, as the external_service_IP may be changed by our infrastructure team at any point, and they only want to maintain the docker-host's /etc/hosts file. So I want to do something like:
docker run --add-host <alias>:externalServiceHostEntry ...
Though I know this won't work, for the obvious reason that the docker-host's externalServiceHostEntry hostname has no meaning in the container's /etc/hosts.
How can I achieve a setup where our infrastructure team can change the docker-host's /etc/hosts file at will, and yet the service running in my container will still be able to communicate with the external service via a constant alias?
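One possible compromise, sketched with hypothetical names: a small wrapper script that resolves the host's /etc/hosts entry at launch time and bakes the result into the container with --add-host. The mapping is still fixed for the container's lifetime, but the infrastructure team only ever edits the host's /etc/hosts. The fallback address is from the documentation range, and the final docker command is echoed so the sketch runs anywhere:

```shell
# Resolve the host-side entry via the normal resolver chain (getent
# consults /etc/hosts), then pass the result to --add-host.
ENTRY="externalServiceHostEntry"
IP=$(getent hosts "$ENTRY" 2>/dev/null | awk '{print $1}')
IP="${IP:-203.0.113.10}"   # illustrative fallback for the sketch

echo docker run --add-host "myalias:$IP" my-image
```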
I am looking for a way to assign a domain name to the container when it is started. For example, I want to start a web server container, and to be able to access web pages via domain name. Is there an easy way to do this ?
As far as I know, Docker doesn't provide this feature out of the box, but there are several workarounds. Essentially, you need to deploy a DNS server on your host that distinguishes the containers and resolves their domain names to their dynamic IPs. You could try the following:
Deploy one of the Docker-aware DNS solutions (I suggest SkyDNSv1/SkyDock);
Configure your host to use this DNS server (by default, SkyDNS makes the containers know each other by name, but the host is not aware of it);
Run your containers with an explicit --hostname (you will probably use the scheme container_name.image_name.dev.skydns.local).
You can skip step #2 and run your browser inside a container too: it will discover the web application container by hostname.
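The --hostname step might look like this; the web/nginx names follow the container_name.image_name scheme from the answer and are only illustrative, and the command is echoed so the sketch runs without a daemon:

```shell
# Start the web server with a SkyDNS-resolvable FQDN.
CMD="docker run -d --name web --hostname web.nginx.dev.skydns.local nginx"
echo "$CMD"
```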