Reload docker stack network configuration

I have multiple docker stacks that are connected over the same network. If I restart one of the stacks, the internal IP addresses of that stack's containers seem to change. That results in wrong service-name resolutions in other stacks and containers. It seems that the internal Docker name service doesn't recognize the network change.
If I try to access other containers from a stack container's shell with ping by service name (for example ping my_stack_my_container_name), I get a successful reply, but from a wrong IP address. When I use the full container name instead (ping my_stack_my_container_name.134.134234234123), the reply comes from the right IP.
Is there any way to trigger a reload of the stack networking/name service?

Are you sure it's not reloaded? Check whether the docker containers (those that should connect to the restarted container) are caching DNS query results. I had the same issue in an HAProxy config, and I added the following lines to force HAProxy to hold resolved values for only 1s:
resolvers docker
    # well-known Docker embedded DNS server address
    nameserver dns 127.0.0.11:53
    # HAProxy will hold the name-to-IP mapping for only 1s, so a fresh container
    # IP is resolved for each request, balancing load
    hold valid 1s

(...)

backend stackName_app_backend
    server stackName_app_service stackName_ServiceName:80 resolvers docker check
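To tell the two cases apart, you can query Docker's embedded DNS server directly from inside an affected container, bypassing any application-level cache. A quick sketch, assuming the container image ships nslookup and using placeholder container/service/network names:

# Query Docker's embedded DNS (127.0.0.11 on user-defined networks) directly:
docker exec -it some_container nslookup my_stack_my_container_name 127.0.0.11
# Compare with the addresses the shared network currently reports:
docker network inspect my_shared_network --format '{{json .Containers}}'

If the embedded DNS already returns the new address while the application still hits the old one, the stale value is cached in the application, not in Docker.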

Related

How do I make host.docker.internal work with custom dns configuration enabled?

I have docker compose running with several containers. One of those containers is a DNS server running BIND. In my docker daemon configuration I specify the DNS servers like this:
"dns" : [
"10.1.1.8", /* static ip address of my dockerized bind container defined in compose */
"x.x.x.x", /* my companies internal vpn dns */
"8.8.8.8" /* google dns */
]
This all works fine. The containers in my compose file use the BIND server running on 10.1.1.8 for DNS lookups, then fall back on my company's internal DNS, and lastly Google's DNS for external websites.
Docker provides a special DNS name, host.docker.internal, which should point at the host IP (say you want Docker containers to connect to services running locally on the host but not in Docker). I want to use this in a few containers so that they can reference the host IP address without hardcoding an IP that can change. In fact, Docker inserts this value into the hosts file (windows/system32/drivers/etc/hosts) on the host operating system and updates it whenever the host IP is assigned a new address.
The issue is that Docker uses DNS to resolve host.docker.internal. Using my custom DNS configuration in the daemon breaks this, and I get issues reaching the host OS service. I spent two hours debugging before realizing that host.docker.internal starts working only when I delete the dns configuration from the daemon config. Is there any way to make Docker resolve this name correctly and still use the custom BIND server on the same machine? Can I somehow update the daemon dns list to also point at some Docker DNS IP address?
Have you considered relying on Docker Compose and how it helps define custom DNS addressing policies? See the official Compose DNS configuration guide.
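For illustration, a minimal sketch of a per-service dns override in Compose rather than a daemon-wide one (service name and image are placeholders; 10.1.1.8 is the BIND address from the question):

# Write a throwaway compose file and start it; placeholder names throughout.
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  app:
    image: alpine          # placeholder image
    command: sleep 1d      # keep the container alive for testing
    dns:
      - 10.1.1.8           # dockerized BIND container
      - 8.8.8.8            # public fallback
EOF
docker-compose up -d

Scoping the dns setting to individual services leaves the daemon defaults alone, which may let host.docker.internal keep resolving through Docker's own machinery.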

Make the DNS server of docker container another docker container running DNSmasq

I have a set of docker containers created with docker compose, which creates a "user-defined" bridge network.
One of these docker containers is running DNSmasq so we can define custom (internal) domain names to point to local IPs.
Trouble is, none of the other docker containers can resolve these domain names. I think the issue is that I can't get the docker DNS to forward its requests to the docker container running DNSmasq (i.e. it doesn't even know it exists).
As a test, I did a docker network inspect <network created by docker compose> and noted the IP address of the DNSmasq container. Then, in one of the other containers' /etc/resolv.conf, I set nameserver to that IP address. Then I can resolve all these internal domain names.
Sadly, putting the dnsmasq service name in there instead doesn't work, despite the fact that the user-defined network has automatic service discovery enabled.
It seems that one way to make this work, then, is to force my DNSmasq container to always have the same IP, and then make sure the other docker containers point to that as their nameserver, e.g. by defining the network explicitly in the compose file (see the sketch after this question).
Is there no other way? I'd rather not have to define the entire network to replicate this automatically created one when all I want is to know the IP address of one container.
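For what it's worth, pinning one container's address doesn't mean replicating the whole auto-created network. A minimal compose sketch, with the image names and the subnet entirely hypothetical:

cat > docker-compose.yml <<'EOF'
version: "2.1"
services:
  dnsmasq:
    image: andyshinn/dnsmasq       # placeholder dnsmasq image
    networks:
      internal:
        ipv4_address: 172.28.0.53  # fixed address for the resolver
  app:
    image: alpine                  # placeholder service
    command: sleep 1d
    dns: 172.28.0.53               # resolve through the dnsmasq container
    networks:
      - internal
networks:
  internal:
    ipam:
      config:
        - subnet: 172.28.0.0/16
EOF
docker-compose up -d

Only the subnet and the resolver's ipv4_address are fixed; everything else is left to Docker's defaults.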

Remote Docker container by hostname

How do you access a remote Docker container by its hostname?
I need to access remote Docker containers by their hostnames (or some constant IPs) for development and testing purposes. I have tried:
looking for any DNS-based approach (no clues found),
importing /etc/hosts (probably impossible),
creating tunnels (the only thing that works, but it is very time consuming).
It's the same as running any other process on a host, Docker or not Docker: you access it via the host name or IP address of the host and the port the service is listening on (the first port of the docker run -p argument). Docker containers don't have externally visible individual IP addresses any more than non-Docker HTTP or ssh daemons do.
If you do have DNS infrastructure available to you, you could set up CNAME records to resolve particular service names to the specific hosts that are running them.
One solution that may help you is some sort of service registry; in the past I've used Consul with some success. You can configure Consul with some health checks or other probes ("look for an HTTP service on port 12345 that answers GET / calls"), and it will provide its own DNS service ("okay, http://whatevername.service.consul:12345/ will reach your service on whichever hosts it happens to be running on").
Nothing in the Docker infrastructure specifically helps this. Using /etc/hosts is distinctly not a best practice: the name-to-IP mapping needs to be kept in sync across all machines and you'll start wishing you had a network service to publish it for you, which is exactly what DNS is for.
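A concrete sketch of the host-centric approach (host name, image, and ports are placeholders):

# On the remote Docker host, publish the container port onto the host:
docker run -d --name web -p 8080:80 nginx
# From anywhere that resolves the host's name, address the host, not the container:
curl http://docker-host.example.com:8080/

A DNS CNAME such as web.example.com pointing at docker-host.example.com then gives the service a stable name without the container needing one of its own.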

Docker doesn't resolve hostname

I need to know the hostnames (or IP addresses) of some containers running on the same machine.
As I already commented here (but with no answer yet), I use docker-compose. The documentation says Compose will automatically create a hostname entry for all containers defined in the same docker-compose.yml file:
Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
But I can't see any host entry via docker exec -it my_container tail -20 /etc/hosts.
I also tried adding links to my container, but nothing changed.
Docker 1.10 introduced some new networking features, including an internal DNS server that handles host lookups.
On the default bridge network (docker0), lookups continue to work via /etc/hosts as they used to, and /etc/resolv.conf will point to your host's resolvers.
On a user-defined network, Docker uses the internal DNS server, and /etc/resolv.conf will hold an internal IP address for the Docker DNS server. This setup allows bridge, custom, and overlay networks to work in a similar fashion, so an overlay network on Swarm will populate host data from across the swarm just as a local bridge network would.
The "legacy" setup was maintained so the new networking features could be introduced without impacting existing setups.
Discovery
The DNS resolver can provide IPs for a Docker Compose service via the name of that service.
For example, with a web and db service defined, and the db service scaled to 3, all db instances will resolve:
$ docker-compose run --rm web nslookup db
Name: db
Address 1: 172.22.0.4 composenetworks_db_2.composenetworks_mynet
Address 2: 172.22.0.5 composenetworks_db_3.composenetworks_mynet
Address 3: 172.22.0.3 composenetworks_db_1.composenetworks_mynet

Cross container communication with Docker

An application server is running in one Docker container and a database is running in another container. The IP address of the database server is obtained as:
sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' db
Setting up JDBC resource in the application server to point to the database gives "java.net.ConnectException".
Linking containers is not an option since that only works on the same host.
How do I ensure that IP address of the database container is visible to the application server container?
If you want private networking between Docker containers on remote hosts, you can use Weave to set up an overlay network between the containers. If you don't need a private network, just expose the ports using the -p switch and configure the host machine's address as the destination IP in the required docker container.
One simple way to solve this would be using Weave. It allows you to create many application-specific networks that can span multiple hosts as well as datacenters. It also has a very neat DNS-based service discovery mechanism.
Full disclosure: I am on the Weave engineering team.
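A rough sketch of the classic Weave CLI flow, with placeholder host names (check the Weave docs for current usage):

# On host1, start the Weave router:
weave launch
# On host2, start it and peer with host1:
weave launch host1.example.com
# Route this shell's docker commands through the Weave proxy:
eval $(weave env)
# Containers started now join the overlay and get Weave DNS names:
docker run -d --name db my-db-image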
Linking containers is not an option since that only works on the same host.
So are you saying your application is a container running on docker server 1 and your DB is a container on docker server 2? If so, treat it like ordinary remote hosts: your DB port needs to be exposed on docker server 2, and that IP:port needs to be configured into your application server, typically via environment variables.
The per-host docker subnetwork is a private network. It is perhaps possible to make those addresses routable, but it would be much pain, and it's further complicated because container IPs are not static.
What you need to do is publish the ports/services up to the host (via EXPOSE in your Dockerfile and -p in your docker run); then you just do host-to-host. You can resolve hosts by IP, environment variables, or good old DNS.
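As a sketch (host names, ports, and images are placeholders):

# On docker server 2, publish the database port onto the host:
docker run -d --name db -p 3306:3306 mysql:5.7
# On docker server 1, hand the app the host-level address via environment variables:
docker run -d --name app -e DB_HOST=dockerserver2.example.com -e DB_PORT=3306 my-app-image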
A few things were missing that prevented cross-container communication:
WildFly was not bound to 0.0.0.0 and thus was only accepting requests on eth0. This was fixed using "-b 0.0.0.0".
The firewall was not allowing the containers to communicate. This was resolved using "systemctl stop firewalld; systemctl disable firewalld".
The VirtualBox image required a host-only adapter.
After this, the containers are able to communicate. Complete details are available at:
http://blog.arungupta.me/2014/12/wildfly-javaee7-mysql-link-two-docker-container-techtip65/
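For reference, a sketch of the first fix, starting WildFly bound to all interfaces (image tag and port mapping are illustrative):

# Bind WildFly to 0.0.0.0 so requests arriving on any interface are accepted:
docker run -d -p 8080:8080 jboss/wildfly /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0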
