Assign hostnames to exposed Docker ports

Okay, so in Vagrant/VVV you can assign different hostnames to your different projects, so that when you go to http://myproject-1.dev your website shows up.
This is very convenient if you are working on dozens of projects at the same time. As far as I know, such a thing is not possible in Docker (it can't touch the hosts file). My question is: is there something similar we can do in Docker? Some automated tool, maybe?
I'm using Docker for Windows.

Hostnames can map many containers together. In Docker Compose, there's a hostname option, but that only applies within the Docker bridge network; it is not visible to the host.
Docker isn't a VM (although on Windows it runs within one).
You can edit your hosts file to make the hypervisor reachable by name, but the intended pattern is to forward host ports into the container and then use localhost, not a custom hostname.
If you prefer your Vagrant patterns, keep using Vagrant, but provision Docker containers from it, or use Docker Machine.
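As a rough sketch of the publish-to-localhost pattern (the service name, image, and ports below are placeholders, not taken from the question):

    # docker-compose.yml: publish the container's port 80 on host port 8080
    version: "2"
    services:
      web:
        image: nginx
        ports:
          - "8080:80"

If you still want a friendly name, nothing stops you from adding a line such as "127.0.0.1 myproject-1.dev" to the Windows hosts file (C:\Windows\System32\drivers\etc\hosts) yourself and browsing to http://myproject-1.dev:8080; the name only has to resolve to the loopback address where the port is published.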

Related

Figuring out the IP address of a service for dockerized Consul

I am building a microservices based application and would like to use Consul as service registry. All in all I have three scenarios:
All the services run on the host.
All the services run on the host, but Consul runs in Docker.
All the services and Consul run in Docker.
Now I have the problem of how to register the services, because I need to figure out an IP address for each one that is reachable by Consul (e.g., for the health checks):
If everything runs on the same host, it's pretty easy: Simply use 127.0.0.1, and you're done.
If everything (including Consul) runs in Docker, I could use hostname -i from within the Docker containers to figure out their external IP and hand it over to Consul. This works, but I wonder if there is a better way to solve this? (Ideally, the solution should also work in the same way on Kubernetes.)
If the services run on the host, but Consul runs in Docker, right now I am missing any idea at all. Basically, Consul requires the host's IP address to be able to talk to the services, but I can only detect it from within the Consul container (by resolving host.docker.internal). First, this does not work from outside the container, and second, it only works on Docker for Mac / Windows, not e.g. with Kubernetes.
How could I solve these issues?
PS: I would like to avoid using a container such as registrator by Gliderlabs, since I have doubts how well this works on Kubernetes, and also it won't help with the mixed Docker / host scenario.
If you're using Kubernetes, you might start by checking whether its built-in service registry meets your needs. There's generally not a direct path to reach a pod via its node's IP address, so the setup you describe won't really work well. (I might consider Consul for a key/value store, but I wouldn't reach for it as a service registry in Kubernetes land.)
In plain multi-host Docker land, this is one of the few situations I've found where host networking is appropriate. Start Consul with --net host, or the equivalent option in Docker Compose or another orchestration tool. Consul will then believe "its" IP address is the host's, and if you have automated TCP probes of well-known ports, you can scan every service running on the host and discover e.g. a MySQL service on port 3306, whether it runs in a container or natively on the host.
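A minimal sketch of that, using the official consul image in throwaway dev mode (a real cluster needs more flags than this):

    # Host networking makes Consul advertise the host's own IP instead of a
    # container IP. -dev runs a single-node development agent;
    # -client=0.0.0.0 exposes the HTTP and DNS endpoints.
    docker run -d --name consul --net=host consul agent -dev -client=0.0.0.0

In Docker Compose the equivalent is network_mode: host on the service.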
With this setup, servicename.service.consul will resolve to some physical host's IP address. If a Docker container points at its current host for DNS service, a lookup will route a service to some host, not necessarily the same one, but this has worked reliably for me in the past.
Note that the relevant hostnames will be different in different environments: servicename.service.consul for a Consul-based setup, servicename.namespacename.svc.cluster.local in Kubernetes, maybe localhost in a developer-desktop environment. You need to make sure this is configurable, most straightforwardly via an environment variable.
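A sketch of what that configurability can look like, with hypothetical variable and service names:

    # The deployment injects the right name; the application never hardcodes it.
    #   Consul:      DB_HOST=mysql.service.consul
    #   Kubernetes:  DB_HOST=mysql.default.svc.cluster.local
    #   Desktop:     DB_HOST unset, falls back to localhost
    DB_HOST="${DB_HOST:-localhost}"
    mysql -h "$DB_HOST" -P 3306 -u app -p appdb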

How to query Docker DNS from within a container?

I'm using docker-compose.yml version 2. One of my containers doesn't see another one and I'm trying to debug this.
Previously, docker-compose relied on the links mechanism for containers to interact with each other. It was implemented via dynamic modification of the /etc/hosts file.
Starting with version 2 of the Compose file format, docker-compose relies on Docker's built-in DNS mechanism. Now /etc/hosts is left intact, and containers query their internal Docker DNS to find each other.
I wonder, how do I query Docker DNS from within a container for debugging purposes? I want to make sure that it properly resolves the hostname of other containers in its network and that they are known by their aliases.
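One way to do this (container and service names below are placeholders): on a user-defined network, Docker's embedded DNS server listens inside every container at 127.0.0.11, so you can query it directly with ordinary DNS tools:

    # /etc/resolv.conf inside the container should show: nameserver 127.0.0.11
    docker exec -it mycontainer cat /etc/resolv.conf

    # Resolve another container or service by name against the embedded DNS.
    # Minimal images may need the tools installed first, e.g.
    # apt-get update && apt-get install -y dnsutils (Debian/Ubuntu).
    docker exec -it mycontainer nslookup otherservice 127.0.0.11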

Easy, straightforward, robust way to make host port available to Docker container?

It is really easy to mount directories into a docker container. How can I just as easily "mount a port into" a docker container?
Example:
I have a MySQL server running on my local machine. To connect to it from a docker container I can mount the mysql.sock socket file into the container. But let's say for some reason (like intending to run a MySQL slave instance) I cannot use mysql.sock to connect and need to use TCP.
How can I accomplish this most easily?
Things to consider:
I may be running Docker natively if I'm using Linux, but I may also be running it in a VM if I'm on Mac or Windows, through Docker Machine or Docker for Mac/Windows (Beta). The answer should handle both scenarios seamlessly, without me as the user having to decide which solution is right depending on my specific Docker setup.
Simply assigning the container to the host network is often not an option, so that's unfortunately not a proper solution.
Potential solution directions:
1) I understand that setting up proper local DNS and making the Docker container (network) talk to it might be a proper, robust solution. If there is such a DNS service that can be set up with 1, max 2 commands and then "just work", that might be something.
2) Essentially, what's needed here is something that listens on a port inside the container and, like a proxy, routes traffic between the TCP/IP participants. There's been discussion on a closed Docker GH issue that shows some ip route command-line magic, but that's too much of a requirement for many people, myself included. If there were something akin to this that was fully automated, understood Docker, and, again, could be set up with 1-2 commands, that would be an acceptable solution.
I think you can run your container with the --net=host option. In this case the container binds to the host's network stack and can access all the ports on your local machine.
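For example (image tag and credentials are placeholders), on Linux this lets the container reach a MySQL server on the host directly:

    # The container shares the host's network stack, so MySQL listening on
    # the host is reachable at 127.0.0.1:3306 from inside the container.
    docker run --rm -it --net=host mysql:5.7 mysql -h 127.0.0.1 -P 3306 -u root -p

Be aware, though, that on Docker for Mac/Windows --net=host attaches the container to the VM's network stack, not to the Mac/Windows host itself, so this does not cover that scenario seamlessly.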

Docker linked containers, Docker Networks, Compose Networks - how should we now 'link' containers

I have an existing app that consists of 4 Docker containers running on the same host. They have been linked together using the link command.
However, after some Docker upgrades, linking has been deprecated and its behaviour seems to have changed. We are having issues where containers are losing the link to each other now.
So, Docker says to use the new network feature instead of linked containers, but I can't see how this works.
If 2 containers are in the same network, are the same ENV vars automatically exposed on the containers as if they were linked?
Or is the hosts file updated with the correct container names / IP addresses? Even after a docker restart?
I can't see in the docs how a container can find the location of another in its network?
Also, compose looks to have a simple set up for linking containers, and may automate some of this - would compose be the way to go for defining multi container apps? Or is it too soon to run it in production?
Does compose support multiple host configuration as well?
At some point in the future we will probably need to move one of the containers to a different host....
If 2 containers are in the same network, are the same ENV vars automatically exposed on the containers as if they were linked?
No, you would now have to use the container names as their hostnames. The new network feature has no notion of which ports will be used. Think of it as two computers plugged into the same network hub: each can address the other by its hostname.
is the hosts file updated with the correct container names / IP addresses? Even after a docker restart?
Yes, the /etc/hosts files of all containers that are part of a network are updated live by the Docker engine.
I can't see in the docs how a container can find the location of another in its network?
Using the container name. See the Connect containers section of the Work with network commands doc:
Once connected, the containers can communicate using another container’s IP address or name.
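For example (network and container names are placeholders):

    # Create a user-defined network and attach two containers to it.
    docker network create mynet
    docker run -d --name web --net=mynet nginx

    # The second container can reach the first by its container name.
    docker run --rm --net=mynet busybox ping -c 1 web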
Also, compose looks to have a simple set up for linking containers, and may automate some of this - would compose be the way to go for defining multi container apps? Or is it too soon to run it in production?
Compose supports the new network feature as beta by offering the --x-networking option. You should not use it in production yet (current Compose version is 1.5).
Furthermore, the current implementation is a bit inconvenient as we must use the full container name which is composed of the project name + _ + container name + _1. The documentation says the next version (current one is 1.5) will improve this so that we should not have to worry about the project name to address containers.
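Sketched with a hypothetical project "myapp" and service "db":

    # Start the project with the experimental networking support.
    docker-compose --x-networking up -d

    # From another container on the same network, the service currently has
    # to be addressed by its full container name.
    ping myapp_db_1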
Does compose support multiple host configuration as well?
Yes, in conjunction with Swarm, as detailed in the overlay network documentation.

Docker host information and cluster

I am setting up a simple cluster using Docker on several hosts. Before using Docker, the processes were simply started with an argument giving the address of a config server. The first thing each process does is connect to the config server, get the addresses (host and port) of all the other services, and register itself with its own host (and several different ports, one for each of the services it provides).
However, it does not seem possible to dockerize this workflow. Since a process in a container apparently cannot get the address and ports on the host (based on, for example, How to get the IP address of the docker host from inside a docker container), it does not know what to register itself as. Is this really not possible?
If not, are there any alternative ways this sort of setup is intended to be run using docker?
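One workaround sketch (variable names and ports are illustrative, not from the question): since the container cannot discover the host's address and published ports on its own, pass them in explicitly when you start it, and have the process register those values with the config server:

    # On the host: pick an address the other machines can reach (Linux example).
    HOST_IP=$(hostname -I | awk '{print $1}')

    # Tell the container what it looks like from the outside.
    docker run -d -p 8080:8080 \
      -e ADVERTISE_HOST="$HOST_IP" \
      -e ADVERTISE_PORT=8080 \
      myservice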
