Add docker containers to hosts automatically by name - docker

A common workflow for me: during development of a web project I run docker-compose up, then docker inspect repo_app_1 | grep IPAddress, and finally open that IP address in the browser.
Instead of fetching the container's IP each time, I want to add the name of this container together with its IP to the hosts file.
What would be the best way to do that? It's certainly possible; one way I can think of is to wrap the docker and docker-compose commands so that after each invocation a script extracts the container details, appends them to the hosts file, and also removes the older entries.
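A minimal sketch of that wrapper idea, using docker inspect templates rather than awk (the refresh-docker-hosts.sh name and the "# docker-auto" marker are made up for illustration, and it assumes sudo access to /etc/hosts):
#!/bin/sh
# refresh-docker-hosts.sh -- run after each docker-compose up / docker run
# drop previously generated entries, marked with "# docker-auto"
sudo sed -i '/# docker-auto$/d' /etc/hosts
# append one hosts entry per running container
docker ps -q | while read -r id; do
  name=$(docker inspect --format '{{ .Name }}' "$id" | sed 's|^/||')
  ip=$(docker inspect --format '{{ range .NetworkSettings.Networks }}{{ .IPAddress }}{{ end }}' "$id")
  [ -n "$ip" ] && echo "$ip $name # docker-auto" | sudo tee -a /etc/hosts > /dev/null
done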

One possibility is to use Traefik, a Docker-aware reverse proxy that includes its own monitoring dashboard.
See for instance "Traefik on Docker for Web Developers - With bonus Let's Encrypt SSL!" by Juan Treminio, which shows how to register your containers automatically and access them through a pre-defined URL.
Juan describes how to solve the "port dance":
If port 80 is mapped to web-server-A, you must choose another port to bind for web-server-B and web-server-C.
This can quickly get old because you must remember that http://localhost goes to A, http://localhost:81 goes to B, and http://localhost:82 goes to C.
He points out:
On virtual machines this problem does not really occur because you can assign a static IP address to your servers and bind it in your system's hosts file (/etc/hosts). However, you must edit /etc/hosts for every VM you spin up, and the list grows with the number of projects you handle.
Containers are ephemeral by nature and do not normally get created on your host's network, but rather on private networks with their own random IP addresses within special ranges.
Træfik solves both of these problems, first by removing the need to use ports in URLs and second by not needing you to edit /etc/hosts at all.
A new container registers itself on the Traefik Docker network (created with docker network create --driver bridge traefik_webgateway) with:
docker run -d --name some-mailhog \
--network traefik_webgateway \
--label traefik.docker.network=traefik_webgateway \
--label traefik.frontend.rule=Host:mailhog.localhost \
--label traefik.port=8025 \
mailhog/mailhog
The URL becomes simply http://mailhog.localhost.
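For completeness, a rough sketch of how the Traefik proxy itself might be started on that network (Traefik 1.x flags, matching the label syntax above; adjust for your version):
docker network create --driver bridge traefik_webgateway
docker run -d --name traefik \
--network traefik_webgateway \
-p 80:80 -p 8080:8080 \
-v /var/run/docker.sock:/var/run/docker.sock \
traefik:1.7 --api --docker --docker.domain=localhost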

Related

Multiple Docker host machine communication

Suppose I want to connect a container with another container, where the two Docker containers are running on different machines. How do I do that?
This works exactly the same way as if neither process was running in Docker: connect to the other system's IP address and the port you published when you launched the container.
machine02$ docker run --name m2-c1 -p 12345:80 image1
machine01$ docker run --name m1-c5 \
> -e CONTAINER_1_URL=http://192.168.1.102:12345 \
> image5
If you find yourself doing this often, a clustered setup like Kubernetes or Docker Swarm is built for this sort of environment. They have a piece called an overlay network that would allow all 10 containers to share a single "network", so you can directly call c1 as a host name and reach either copy of it. A non-Docker service discovery system, like Hashicorp's Consul, can also help remember what service is running on which node.
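As a rough sketch of that Swarm variant (image names are carried over from the example above, the service listens on container port 80 as before, and the join token and addresses are placeholders):
machine01$ docker swarm init --advertise-addr 192.168.1.101
machine02$ docker swarm join --token <worker-token> 192.168.1.101:2377
machine01$ docker network create -d overlay app-net
machine01$ docker service create --name c1 --network app-net image1
machine01$ docker service create --name c5 --network app-net \
> -e CONTAINER_1_URL=http://c1:80 image5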

Is there any way to flush the docker DNS cache (internal)?

I'm using Docker 18.03.1-ce and if I create a container, remove it and then re-create it, the internal DNS retains the old address (in addition to the new).
Is there any way to clear or flush the old entries? If I delete and re-create the network then that flushes it but I don't want to have to do that every time.
I create the network:
docker network create -d overlay --attachable --subnet 10.0.0.0/24 --gateway 10.0.0.1 --scope swarm -o parent=ens224 overlay1
Then create a container (SQL for this example)
docker container run -d --rm --network overlay1 --name sql -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Some_SA_Passw0rd' -p 1433:1433 microsoft/mssql-server-linux
If I create an Alpine container on the same network, I can nslookup sql by name and it resolves to 10.0.0.6. No problems; so far, so good.
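For reference, that lookup can be reproduced from a throwaway container on the same attachable network with something like:
docker container run --rm --network overlay1 alpine nslookup sql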
Now, if I remove the SQL container and re-create it, nslookup sql shows both 10.0.0.6 and 10.0.0.8. The 10.0.0.6 entry is the old address; that container is no longer alive, but the name still resolves to it.
The nameserver my containers are using is 127.0.0.11, which is typical for a user-defined network, but I haven't been able to find anything that will let me clear its cache.
Maybe I'm missing something but I had assumed the DNS entries would be torn down whenever the containers get removed.
Any insight is certainly appreciated!
I have just fixed the same problem by running the containers in Docker Swarm. It seems Swarm does something to keep DNS entries up to date. I tried removing my application container manually with docker rm and scaling it up and down; in every case its hostname resolved only to the IP addresses of containers that actually existed.
If you can't use Swarm, I guess another solution would be to run a standalone service-discovery tool (maybe in another container) and configure your other containers to use it as their DNS server instead of the built-in one.
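A minimal sketch of that Swarm approach, reusing the SQL example above (the service name and replica count are illustrative):
docker service create --name sql --network overlay1 --replicas 1 \
-e ACCEPT_EULA=Y -e SA_PASSWORD=Some_SA_Passw0rd \
-p 1433:1433 microsoft/mssql-server-linux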
Like Daniele, I also have the DNS problem in Swarm mode (stack). I kill the services running on one node (making sure other instances are running on other nodes), and Swarm starts recreating them. But in the meantime, the DNS gives me the wrong IP for the service name. More than that, I would expect DNS resolution to give me a different IP each time, but that's not the case: within a short time frame (a few seconds), the DNS returns the same IP for a given service, regardless of whether that IP is valid or not.
Daniele, did you file a bug report?

How to share host network bridge when using docker in docker

I'm using the https://github.com/jpetazzo/dind docker image to have docker in docker. When starting docker containers inside the parent docker, is it possible to use the bridge of the parent docker so I can share the network between the containers inside the docker container and the parent docker container?
What I want to do is to access the containers inside the parent docker container from the host directly by IP to assign domain names to them.
UPDATE -> Main Idea
I'm upgrading a free online Java compiler to allow users to run any program using Docker. So I'm using dind (the Docker-in-Docker image) to launch a main container that contains a Java program which receives requests and launches Docker containers inside of it.
So what I want to do is to give the users the option to run programs that expose a port and let them access their containers using a subdomain.
So graphically I have this hierarchy
Internet -> My Host -> Main Docker Container -> User Docker Container 1
                                             -> User Docker Container 2
                                             -> User Docker Container n
And what I want to do is to give the user a subdomain name to access his "User Docker Container", for example: www.user_25.compiler1.browxy.com
So he can have a program that exposes a port in his "User Docker Container" and access it using the subdomain www.user_25.compiler1.browxy.com
What confuses me is that to access the "User Docker Container" I need to go through the Main Docker Container first. I'm trying to find a way to access the "User Docker Container" directly, so I thought that if the User Docker Container and the Main Docker Container shared the same network, I could access the User Docker Container directly from the host and assign a domain name to its IP by updating the /etc/hosts file on the host.
Thanks a lot for any advice or suggestion :)
Finally, I took many of the ideas larsks gave me, and this is what I did:
Start the docker-in-docker container with a name (--name compiler)
Execute this command on the host -> sudo route add -net 10.0.0.0 gw $(docker inspect --format '{{ .NetworkSettings.IPAddress }}' compiler) netmask 255.255.255.0
For this to work I added a custom bridge in the docker-in-docker container that ensures the IP range is 10.0.0.0/24
Now I can ping containers created inside the docker-in-docker container from the host
To have name resolution I installed docker-dns, as larsks suggested, into the docker-in-docker container and added its IP to /etc/resolv.conf on the host (see the sketch below)
The result is that from the host I can access containers created inside the docker-in-docker container by name.
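The host-side change for that docker-dns step might look something like this (the 10.0.0.2 address is hypothetical; use whatever address the docker-dns container actually receives):
echo "nameserver 10.0.0.2" | sudo tee -a /etc/resolv.conf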
One possible upgrade I'd like to make is to configure everything with Docker and not add custom stuff to the host, but for now I don't know how to do that and I can live with this solution.
If you run your "Main docker container" with --net=host, then your configuration simplifies to:
Internet -> Host -> User Docker Container 1
                 -> User Docker Container 2
                 -> User Docker Container n
Although you probably want to use a bridge other than docker0 for the child containers (e.g., create a new bridge docker1, and start your dind Docker daemon with -b docker1).
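A sketch of that bridge setup on the host using iproute2 (the 10.0.0.1/24 range matches the accepted solution above; how -b docker1 reaches the inner daemon depends on how your dind image starts dockerd):
sudo ip link add name docker1 type bridge
sudo ip addr add 10.0.0.1/24 dev docker1
sudo ip link set docker1 up
# the inner Docker daemon is then started with: dockerd -b docker1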
If two users were to attempt to publish a service on the same port at the same IP address, then yes, you would have port conflicts. There are a few ways of working around this:
If you can support multiple public IP addresses on your host, then you can "assign" (in quotes because this would not be automatic) one to each container. Instead of running docker run -p 80:80 ..., you would need to make the bind IP explicit, like docker run -p 1.2.3.4:80:80. This requires people to "play nice"; that is, there is nothing to prevent someone from either forgetting to specify a bind address or specifying the wrong address.
If you are explicitly running web services, then you may be able to use some sort of front-end proxy to map subdomain names to containers using name-based virtual hosts. There are several components to this process, and making it automated would probably require a little work. Doing it manually is comparatively easy (just update /etc/hosts, for example), but is fragile because when a container is restarted it will have a new IP address. Something like a dynamic DNS service can help with this.
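One hedged sketch of that front-end-proxy idea, using the jwilder/nginx-proxy image (the subdomain comes from the question; user_image is a placeholder):
# the proxy watches the Docker socket and generates nginx virtual hosts automatically
docker run -d -p 80:80 \
-v /var/run/docker.sock:/tmp/docker.sock:ro \
jwilder/nginx-proxy
# user containers opt in with a VIRTUAL_HOST environment variable
docker run -d -e VIRTUAL_HOST=www.user_25.compiler1.browxy.com user_image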
These are mostly suggestions more than solutions, but let me know if you would like more details. There are probably other ways of cracking this particular nut, so hopefully someone else will chime in.

How to connect docker's container with pipeline

I would like to have a custom server which listens inside a Docker container (e.g., on TCP 192.168.0.1:4000). How do I send data in and out from outside the container? I don't want to use host ports for bridging. I would rather use pipelines or something that doesn't take up host network resources. Please show me a full example of the docker command.
You can use Docker volumes.
You start your container as
docker run -v /host/path:/container/path ...
and then you can pipe data to files in /host/path and they will be visible in /container/path, and vice versa.
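To get something pipe-like rather than plain files, you could place a named pipe (FIFO) in that shared directory; a rough sketch, with the same placeholder paths and the myserver image standing in for your own:
# on the host: create a named pipe inside the shared directory
mkfifo /host/path/in.pipe
# in the container: read from the pipe via the mounted path
docker run -d -v /host/path:/container/path myserver sh -c 'cat /container/path/in.pipe'
# on the host: whatever is written here is read by the process inside the container
echo "some data" > /host/path/in.pipe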
As long as your server's clients are docker containers as well you don't need to expose any host ports:
docker run --name s1 -d -p 3000 myserver
docker run -d --link s1:serverName client
Now you can reach your server from the client container at serverName:3000.
UPDATE: I just saw that you want to be able to send data from outside any containers. You can still use the same approach, depending on your use case/data volume. Every time you want to send data, create a container that sends it. Using the CLI it might look like:
echo "Lots of data" | docker run --rm -i --link s1:serverName client
The client would have to read from stdin and send the data to serverName:3000. After it's finished, it will be removed automatically.
I don't think what you're asking for makes sense. Let's say you use a UNIX pipe to capture standard output from a docker container.
$ docker run --rm -t busybox dd if=/dev/urandom count=1 > junk
$ du -hs junk
4.0K junk
If your docker client is connected to the docker host via TCP, that traffic of course uses the host's networking stack. It uses a method called hijacking to transport data on the same socket as the HTTP-ish connection between the client and host.
If your docker client is connected to the host via a unix socket, then your client is on the host, and that pipeline is not using the tcp stack. But you still can't transport that data off the host without using the host's networking.
So using the networking stack is unavoidable if you want to get data from the host. That said, if your criteria is just to avoid allocating additional ports, pipelines do allow you to use the original docker host socket instead of creating new ports. But pipelines aren't the same as a tcp socket, so your application needs to be designed to understand standard input and output.
One approach that grants you access to an already-created container's internally-exposed ports is the ambassador container-linking pattern.
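A minimal sketch of such an ambassador, implemented here with socat via the alpine/socat image (names reuse the s1 example above; note that publishing port 4000 does consume a host port, which is the usual trade-off of this pattern):
# relays connections on its own port 3000 to the linked s1 container
docker run -d --name s1-ambassador --link s1:s1 -p 4000:3000 \
alpine/socat TCP-LISTEN:3000,fork,reuseaddr TCP:s1:3000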

CoreOS security

I'm playing with CoreOS and DigitalOcean, and I'd like to start allowing internal communication between my containers.
I've got private networking set up for all the hosts, and now I'd like to ensure that some containers only open ports to localhost and to the internal interface.
I've explored a lot of options for this, but none of them seem satisfactory:
Using '-p', I can ensure Docker binds to the local interface, but this has two downsides:
I can't easily test services by SSHing in, because that traffic originates from localhost
I need to write somewhat hacky shell scripts to start my services, in order to inject the address of the machine that the container is running on
I tried using flannel, but it doesn't make the traffic private (or I didn't set it up right)
I considered using iptables on the containers to prevent external access, but that doesn't seem as secure
I tried using iptables on the coreos hosts, but ... it's tricky, and I couldn't get it working.
When I tried to configure iptables on the host, I used the method described here: https://docs.docker.com/articles/networking/#communication-between-containers-and-the-wider-world, by adding a DROP rule to the DOCKER chain, but it didn't work, and packets still got through.
So what's the best approach? I'll invest time in making it work.
Overall, I guess I need to find something that I can:
Roll out to all the hosts reliably
Something that is reasonably flexible going forward
Something that allows for 'edge machines' which are accessible from the wider internet.
Solution
I'll go into how I ended up solving this. Thanks to larsks for their help. In the end, their approach was the correct one. It's tricky on CoreOS, because there aren't really stable addresses, as larsks' answer assumes. The whole point of CoreOS is to be able to forget about IP addresses.
I solved this by finding a not-too-bad way to inject the ip address into the command in the service file. The tricky thing about this is that it doesn't really support a lot of the shell features I expected. What I wanted to do was to assign the ip address of the machine to a variable then inject it into the command:
ip=$(ifconfig eth1 | grep -o 'inet [0-9]*\.[0-9]*\.[0-9]*\.[0-9]*' | grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*');
/usr/bin/docker run -p $ip:7000:7000 ...
But, as mentioned, that doesn't work. So what to do? Get the shell!
ExecStart=/usr/bin/sh -c "\
export ip=$(ifconfig eth1 | grep -o 'inet [0-9]*\.[0-9]*\.[0-9]*\.[0-9]*' | grep -o '[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*');\
echo $ip;\
/usr/bin/docker run -p $ip:7000:7000"
I hit a few problems along the way.
I'm pretty sure there aren't newlines in that command, so I had to add the ';' characters
when you test the above bash -c command in a shell, it behaves very differently from when systemd runs it. In the shell you need to escape the '$' characters, while in systemd config files you don't.
I included the echo so that I could see what the command thought the ip was.
When I was doing all this, I actually inserted a small webserver to the docker image, so that I could just test using curl.
Downsides of this approach are that it's tied to the way ifconfig formats its output, and to IPv4. In fact, this approach doesn't work on my Linux Mint laptop, where ifconfig produces differently formatted output. The important lesson here is to use tools that output YAML or JSON, so that shell JSON tools can access the fields more easily.
Instead of grep-ping the IP address, you can use the environment files to get the IP address (both public and private) of the host the service gets scheduled on. This allows you to bind your container ports to either public or private ports in a simple way.
Like so:
[Service]
EnvironmentFile=/etc/environment
ExecStart=/usr/bin/docker run --name myservice \
-p ${COREOS_PUBLIC_IPV4}:80:80 \
-p ${COREOS_PRIVATE_IPV4}:3306:3306 \
ubuntu /bin/bash
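For reference, /etc/environment on a CoreOS host typically contains lines like these (the addresses shown are made-up documentation values):
COREOS_PUBLIC_IPV4=203.0.113.10
COREOS_PRIVATE_IPV4=10.10.0.5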
I've got private networking set up for all the hosts, and now I'd like to ensure that some containers only open ports to localhost and to the internal interface.
This is exactly the behavior that you get with the -p option when you specify an ip address. Let's say I have a host with two external interfaces, eth0 (with address 10.0.0.10) and eth1 (with address 192.168.0.10), and the docker0 bridge at 172.17.42.1/16.
If I start a container like this:
docker run -p 192.168.0.10:80:80 -d larsks/mini-httpd
This will start a container that is accessible over the eth1 interface at 192.168.0.10, port 80. This service is also accessible -- from the host on which the container is located -- at the address assigned to the container on the docker0 network. This would be something like 172.17.0.39, port 80.
This seems to meet your goals:
The container port is exposed over the "private" eth1 interface.
The container port is accessible from the host.
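A quick way to check this from the host, with the addresses from the example above (your container's docker0 address will differ):
curl http://192.168.0.10/
curl http://172.17.0.39/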
I can't easily test services by SSHing in, because that traffic originates from localhost.
If you were running ssh inside a container, you would ssh to it at the "internal" address assigned by Docker. That said, you may want to consider not running ssh inside your containers at all and relying on tools like docker exec instead.
I need to write somewhat hacky shell scripts to start my services, in order to inject the address of the machine that the container is running on
With this solution, there is no need to inject the machine IP into the container.

Resources