Is there any way to flush the docker DNS cache (internal)?

I'm using Docker 18.03.1-ce and if I create a container, remove it and then re-create it, the internal DNS retains the old address (in addition to the new).
Is there any way to clear or flush the old entries? If I delete and re-create the network then that flushes it but I don't want to have to do that every time.
I create the network:
docker network create -d overlay --attachable --subnet 10.0.0.0/24 --gateway 10.0.0.1 --scope swarm -o parent=ens224 overlay1
Then create a container (SQL for this example)
docker container run -d --rm --network overlay1 --name sql -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Some_SA_Passw0rd' -p 1433:1433 microsoft/mssql-server-linux
If I create an Alpine container on the same network, I can nslookup sql by name and it resolves to 10.0.0.6. No problems; so far, so good.
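That test is just a throwaway container on the same network, something like:
docker container run --rm --network overlay1 alpine nslookup sql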
Now, if I remove the SQL container and re-create it then nslookup sql shows 10.0.0.6 and 10.0.0.8. The 10.0.0.6 is the old address and no longer alive but still resolves.
The nameserver my containers are using is 127.0.0.11 which is typical for a user-created network but I haven't been able to find anything that will let me clear its cache.
Maybe I'm missing something but I had assumed the DNS entries would be torn down whenever the containers get removed.
Any insight is certainly appreciated!

I have just fixed the same problem by running containers in Docker Swarm. Swarm seems to keep the DNS entries up to date: I tried removing my application container manually with docker rm and scaling it up and down, and in every case its hostname resolved only to the IP addresses of existing containers.
If you can't use Swarm, another option would be to run a standalone service-discovery tool (perhaps in another container) and configure your other containers to use it as their DNS server instead of the built-in one.
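For example, the SQL container from the question could run as a Swarm service instead; a minimal sketch re-using the flags from the question:
docker service create --name sql --network overlay1 \
  --publish 1433:1433 \
  -e ACCEPT_EULA=Y -e SA_PASSWORD=Some_SA_Passw0rd \
  microsoft/mssql-server-linux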

Like Daniele, I also have the DNS problem in Swarm mode (stack). I kill the services running on one node (making sure other instances are running on other nodes), and Swarm starts recreating them. But in the meantime, the DNS gives me the wrong IP for the service name. What's more, I would expect DNS resolution to give me a different IP every time, but that's not the case: within a short time frame (a few seconds), the DNS returns the same IP for a given service, whether or not that IP is still valid.
Daniele, did you file a bug report?

Related

How does Docker handle communication between different containers on the default bridge on the same host?

Here is my situation:
First, I run a MySQL container (IP: 172.17.0.2) on CentOS.
Then I run a Nacos container with a specified datasource (the MySQL above) on the same host, but I didn't use the IP of the MySQL container; instead I used the IP of the bridge gateway (172.17.0.1) (both containers are attached to the default bridge).
What surprised me was that Nacos works fine: it can query config data from the MySQL container normally.
How did this happen? I have read some documentation but didn't find the answer. It really confused me.
On modern Docker installations, try to avoid using the default bridge network. docker network create a network (it doesn't need any special options, but it does need to be created), and then launch your containers with --net pointing at that network. If you're using Compose, it creates a ("user bridge") network named default for you.
On your CentOS host, if you run ifconfig, you should see a docker0 interface with the 172.17.0.1 address. When you launch a container with the docker run -p option, that container is accessible via the first port number on all host interfaces, including the docker0 interface.
Meanwhile, inside a container (on the default bridge network), it sees that same IP address as the normal IPv4 gateway address (try docker run --rm busybox route -n). So, when you connect to 172.17.0.1:3306, you're connecting out to the host, and then connecting to the published port of the database container.
This isn't a totally standard way to connect between containers, though it will work. You should prefer using Docker named networks, which will let you connect to another container using the container's name without manually doing any IP-address lookups. If you really can't move off of the default bridge network, then the standard approach is to --link to the other container, but this entire path is considered outdated.
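A minimal sketch of that approach with the two containers from the question (the image tags and password are assumptions):
docker network create appnet
docker run -d --name mysql --network appnet -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker run -d --name nacos --network appnet nacos/nacos-server
# inside the nacos container, the database is now reachable as mysql:3306, no IP lookup needed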

Add docker containers to hosts automatically by name

A common workflow for me: during development on a web project I run docker-compose up, then docker inspect repo_app_1 | grep IPAddress, and then open that IP address in the browser.
Instead of fetching the container's IP, I want to add the container's name together with its IP to the hosts file.
What would be the best way to do that? It's certainly possible; one way I can think of is to wrap the docker and docker-compose commands so that after each execution a script pipes the container output through awk, appends entries to the hosts file, and also removes the older ones (something like the sketch below).
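A naive version of that script might look like this (a sketch only; it assumes containers are on networks docker inspect can report, and that you have root to edit /etc/hosts):
#!/bin/sh
# refresh /etc/hosts with an entry for every running container
for name in $(docker ps --format '{{.Names}}'); do
  ip=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$name")
  sudo sed -i "/ $name\$/d" /etc/hosts                       # drop any stale entry for this name
  [ -n "$ip" ] && echo "$ip $name" | sudo tee -a /etc/hosts >/dev/null
done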
One possibility is to use Traefik, a Docker-aware reverse proxy that includes its own monitoring dashboard.
See for instance "Traefik on Docker for Web Developers - With bonus Let's Encrypt SSL!", from Juan Treminio, to automatically register your containers and access them through a pre-defined URL.
Juan describes how to solve the "port dance":
If port 80 is mapped to web-server-A you must choose another port to bind for web-server-B and web-server-C.
This can quickly get old because you must remember that http://localhost goes to A, http://localhost:81 goes to B, and http://localhost:82 goes to C.
He points out:
On virtual machines this problem does not really occur because you can assign a static IP address to your servers, and bind it to your system’s hosts file (/etc/hosts).
Containers are ephemeral by nature and do not normally get created on your host’s network but rather private networks with their own random IP addresses within special ranges. However, you must edit /etc/hosts for every VM you spin up and the list grows with the number of projects you handle.
Træfik solves both of these problems, first by removing the need to use ports in URLs and second by not needing you to edit /etc/hosts at all.
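For context, the proxy itself runs as a container attached to a shared bridge network; a minimal sketch (assuming Traefik v1, which matches the label syntax below):
docker network create --driver bridge traefik_webgateway
docker run -d --name traefik \
  --network traefik_webgateway \
  -p 80:80 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  traefik --docker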
A new container will then register itself on the traefik_webgateway network with:
docker run -d --name some-mailhog \
--network traefik_webgateway \
--label traefik.docker.network=traefik_webgateway \
--label traefik.frontend.rule=Host:mailhog.localhost \
--label traefik.port=8025 \
mailhog/mailhog
The URL becomes simply http://mailhog.localhost.

Can't resolve set hostname from another docker container in same network

I have a db container and a server container, both running in the same network. I can ping the db host by its container id with no problem.
When I set a hostname for the db container manually (-h myname), it has an effect ($ hostname returns the set host), but I can't ping that hostname from another container in the same network. The container id is still pingable.
It works with no problem in docker compose, though.
What am I missing?
Hostname is not used by Docker's built-in DNS service. It's a counterintuitive exception, but since hostnames can change outside of Docker's control, it makes some sense. Docker's DNS will resolve:
the container id
the container name
any network aliases you define for the container on that network
The easiest of these options is the last one which is automatically configured when running containers with a compose file. The service name itself is a network alias. This lets you scale and perform rolling updates without reconfiguring other containers.
You need to be on a user created network, not something like the default bridge which has DNS disabled. This is done by default when running containers with a compose file.
Avoid using links, since they are deprecated. And I'd only recommend adding host entries for external static hosts that are not in any DNS; for container-to-container communication, or for access to other hosts outside of Docker, DNS is preferred.
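As a sketch of the alias approach (all names below are hypothetical):
docker network create mynet
docker run -d --name db --network mynet --network-alias myname postgres
docker run --rm --network mynet alpine ping -c 1 myname   # resolves via the alias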
I've found that the problem can be solved without a shared network by using the --add-host option. The container's IP can be obtained with the inspect command.
But when containers are in the same network, they are able to access each other via their names.
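A sketch of that --add-host variant (the db container name is hypothetical):
DB_IP=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' db)
docker run --rm --add-host db:"$DB_IP" alpine ping -c 1 db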
As stated in the docker docs, if you start containers on the default bridge network, adding -h myname will add this information to
/etc/hosts
/etc/resolv.conf
and the bash prompt
of the container just started.
However, this will not have any effect to other independent containers. (You could use --link to add this information to /etc/hosts of other containers. However, --link is deprecated.)
On the other hand, when you create a user-defined bridge network, docker provides an embedded DNS server to make name lookups between containers on that network possible, see Embedded DNS server in user-defined networks. Name resolution takes the container names defined with --name. (You will not find another container by using its --hostname value.)
The reason why it works with docker-compose is that docker-compose creates a custom network for you and automatically names the containers.
The situation seems to be a bit different, when you don't specify a name for the container yourself. The run reference says
If you do not assign a container name with the --name option, then the daemon generates a random string name for you. [...] If you specify a name, you can use it when referencing the container within a Docker network.
In agreement with your findings, this should be read as: If you don't specify a custom --name, you cannot use the auto-generated name to look up other containers on the same network.
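A quick experiment confirms the distinction (names and images here are only examples):
docker network create testnet
docker run -d --rm --name web --hostname webhost --network testnet nginx
docker run --rm --network testnet alpine ping -c 1 web       # works: the --name resolves
docker run --rm --network testnet alpine ping -c 1 webhost   # fails: the --hostname is not registered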

Is there a way to add a hostname to an EXISTING docker container?

I have some containers that communicate via their IPs on the Docker network.
I can use the -h or --hostname option when running a new container, but I want to set the hostname for an existing container.
Is it possible?
One way is to create a network and add the different containers to it.
When adding a container to the network, you can use the --alias option of docker network connect. Like this:
Create a network:
docker network create <my-network-name>
Add containers in the network:
docker network connect --alias <hostname-container-1> <my-network-name> <container-1>
docker network connect --alias <hostname-container-2> <my-network-name> <container-2>
docker network connect --alias <hostname-container-3> <my-network-name> <container-3>
Each container can then see the other containers by their alias (the alias is used as a hostname). Enjoy.
Generally, you would need to stop the container and run it again with -h (--hostname) (unless you used --net=host).
If you cannot stop the container, you can try to edit its /etc/hostname (in an attached bash session).
The hostname is immutable once the container is created (although technically you can modify /etc/hostname).
As suggested in another answer, you cannot change the hostname by stopping or restarting the container. There are no Docker Engine client parameters for the start command that affect the hostname. That wouldn't make sense anyway, as starting a container simply launches the ENTRYPOINT process in a container filesystem that has already been created (i.e. /etc/hostname has already been written).
It is possible to synchronize the container hostname with the host by using the --uts=host parameter when the container is created. This shares the UTS namespace. I would not recommend --net=host unless you also want to share the host network devices (i.e. bypass the Docker bridge).
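A quick sketch of the --uts option (the alpine image is just an example):
docker run --rm --uts host alpine hostname   # prints the host's hostname, not a generated container id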

How to share host network bridge when using docker in docker

I'm using the https://github.com/jpetazzo/dind docker image to have docker in docker. When starting docker containers inside the parent docker, is it possible to use the bridge of the parent docker so I can share the network between the containers inside the docker container and the parent docker container?
What I want to do is to access the containers inside the parent docker container from the host directly by IP to assign domain names to them.
UPDATE -> Main Idea
I'm upgrading a free online Java compiler to allow users to run any program using docker. So I'm using dind (the docker-in-docker image) to launch a main container that has inside it a Java program that receives requests and launches docker containers inside of it.
So what I want to do is to give the users the option to run programs that expose a port and let them access their containers using a subdomain.
So graphically I have this hierarchy
Internet -> My Host -> Main Docker Container -> User Docker Container 1
-> User Docker Container 2
-> User Docker Container n
And what I want to do is give each user a subdomain name to access his "User Docker Container", for example: www.user_25.compiler1.browxy.com
So he can have a program that exposes a port in his "User Docker Container" and access it using the subdomain www.user_25.compiler1.browxy.com
What confuses me is that to access the "User Docker Container" I first have to go through the Main Docker Container. I'm trying to find a way to access the "User Docker Container" directly, so I thought that if the User Docker Container and the Main Docker Container could share the same network, I could access the User Docker Container directly from the host and assign a domain name to its IP by updating the /etc/hosts file on the host.
Thanks a lot for any advice or suggestion :)
In the end I took many of the ideas larsks gave me, and this is what I did:
Start the docker-in-docker container with a name (--name compiler)
Execute this command on the host: sudo route add -net 10.0.0.0 netmask 255.255.255.0 gw $(docker inspect --format '{{ .NetworkSettings.IPAddress }}' compiler)
For this to work I added a custom bridge in the docker-in-docker container that ensures the IP range is 10.0.0.0/24
Now I can ping containers created inside the docker-in-docker container from the host
To get name resolution I installed docker-dns, as larsks suggested, inside the docker-in-docker container, and added its IP to /etc/resolv.conf on the host
The result is that, from the host, I can access by name the containers created inside the docker-in-docker container.
One possible upgrade I'd like is to configure everything with docker and not add custom stuff to the host, but for now I don't know how to do that and I can live with this solution.
If you run your "Main docker container" with --net=host, then your configuration simplifies to:
Internet -> Host -> User Docker Container 1
-> User Docker Container 2
-> User Docker Container n
Although you probably want to use a bridge other than docker0 for the child containers (e.g., create a new bridge docker1, and start your dind Docker daemon with -b docker1).
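Inside the dind container that could look roughly like this (a sketch; it assumes iproute2 is available and a modern dockerd binary is used):
ip link add docker1 type bridge
ip addr add 10.0.0.1/24 dev docker1
ip link set docker1 up
dockerd -b docker1 &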
If two users were to attempt to publish a service on the same port at the same IP address, then yes, you would have port conflicts. There are a few ways of working around this:
If you can support multiple public IP addresses on your host, then you can "assign" (in quotes because this would not be automatic) one to each container. Instead of running docker run -p 80:80 ..., you would need to make the bind IP explicit, like docker run -p 1.2.3.4:80:80 .... This requires people to "play nice"; that is, there is nothing to prevent someone from either forgetting to specify a bind address or specifying the wrong one.
If you are explicitly running web services, then you may be able to use some sort of front-end proxy to map subdomain names to containers using name-based virtual hosting. There are several components to this process, and automating it would probably require a little work. Doing it manually is comparatively easy (just update /etc/hosts, for example), but fragile, because a restarted container will have a new IP address. Something like a dynamic DNS service can help with this.
These are mostly suggestions more than solutions, but let me know if you would like more details. There are probably other ways of cracking this particular nut, so hopefully someone else will chime in.
