How can a container enumerate hosts available on the network? - docker

Use case: an HAProxy container running with Docker Compose. I want the container to discover which hosts are available so it can regenerate the HAProxy config and reload it.
I know there will be one or more containers named server1 and server2 available. From inside the HAProxy container I can query DNS for server1 and receive more than one IP address. Is that the only way to know when a new server1 container becomes available or dies? I know I can use the Docker API from Python running inside a container that has the Docker host socket mapped into it, but I'm not sure that will work when running on Swarm.
The perfect solution would be an API or command that lets me register an event handler that is called when a new container joins the network.
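For reference, the DNS queries described above look roughly like this (a sketch assuming dig is available inside the container; service names taken from the question):
dig +short server1          # compose: one A record per running server1 container
dig +short tasks.server1    # swarm: per-task IPs (the bare service name resolves to a VIP)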

One solution is to use Registrator (https://github.com/gliderlabs/registrator), Consul, and Consul Template.
Consul provides the service discovery.
Consul Template watches Consul, regenerates the HAProxy config, and reloads HAProxy.
Registrator listens to the Docker Engine and updates Consul whenever a container comes up or goes down.
For the full tutorial on how to implement it, you can refer to my blog (https://sonnguyen.ws/microservices-with-docker-swarm-and-consul/).
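For illustration, a minimal Consul Template sketch for the HAProxy part (file names and the reload command are assumptions; server1 is the service name from the question):
# haproxy.ctmpl -- rendered into haproxy.cfg by consul-template
backend servers
    balance roundrobin{{ range service "server1" }}
    server {{ .ID }} {{ .Address }}:{{ .Port }} check{{ end }}
Run it with something like (pid file path is an assumption):
consul-template -template "haproxy.ctmpl:haproxy.cfg:haproxy -f haproxy.cfg -sf $(cat /run/haproxy.pid)"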

Related

How can I connect two Docker containers with Nomad?

I built two Docker applications that communicate with each other over a Docker network, but I ran into a problem when I tried to run them with Nomad: Nomad does not let you configure the container name and assigns a random name instead. So I can't attach those containers to a Docker network and have them reach each other by their specific names.
So how can I run two or more Docker containers on the same Docker network using Nomad?
I'm aware of a few approaches. The first works with Nomad alone; the others assume that Consul is deployed as well.
Place both containers in the same task group. Nomad will always schedule them on the same node, and you can access the address via the Nomad environment variables NOMAD_IP_<label>, NOMAD_PORT_<label> or NOMAD_ADDR_<label>.
Register the server application (Docker container) in the Consul service registry with Nomad's service stanza. You can then use Nomad's template stanza in the "client" application to render its config (see the sketch below). Example/doc is here.
Set up Consul Connect (service mesh) in your deployment.
You could use the Consul DNS interface. Consul can act as a DNS server, and every service is resolvable at <service_name>.service.<dc>.consul (doc), but you have to configure your servers to use Consul DNS (doc).
Approach 1 is the easiest but has a big limitation (everything on the same node). Approach 2 has worked well for me for several years; Nomad is smart enough to reload/restart your client should the server IP/port change.
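A rough sketch of approach 2's job fragments (service, port label, and file names are made up for illustration):
# in the server task: register the container in Consul, assuming a network port labeled "http"
service {
  name = "server-app"
  port = "http"
}
# in the client task: render the server address into environment variables (single instance assumed)
template {
  data        = <<EOF
SERVER_ADDR={{ range service "server-app" }}{{ .Address }}:{{ .Port }}{{ end }}
EOF
  destination = "local/app.env"
  env         = true
}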

How do Docker containers expose services?

I'm deploying a stack of services through the command:
docker stack deploy -c <docker-compose.yml> <stack-name>
And I'm mapping ports of one of these services on docker compose with ports: 8000:8000.
The network driver being used is overlay.
I can access these services via localhost:8000 and via the peer IPs(?).
When I inspect the network created, I can see the local IPs of each container (for instance, 10.0.1.2). But where is the external IP of the container (the one like 172.0. ...)?
I am running these Docker containers on an Ubuntu virtual machine.
How can I access the services running in containers from other nodes on other networks? Isn't it possible to access them via hostIP:port?
If so, how do I get the host IP? When I do docker-machine ip I get "host is not running".
[EDIT: I wasn't doing port mapping between the host and the VM in virtualbox. Now it works!]
What's the best way to communicate between containers on the same swarm?
Thanks
What's the best way to communicate between containers on the same swarm? Through name discovery?
In general, when containers communicate with each other, you should use the container/service name.
And for your other problem you probably want a reverse proxy like nginx or traefik.
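To illustrate the name-discovery part, a minimal compose sketch (service and image names are made up):
services:
  api:
    image: example/api     # hypothetical image listening on port 8000
  web:
    image: example/web     # inside this container: curl http://api:8000
On a compose-created (user-defined) network, Docker's embedded DNS resolves the service name api to the container's IP.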

How do I combine docker-compose scaling and load balanced port exposure?

If you tell docker-compose to scale a service, and do NOT expose its ports,
docker-compose scale dataservice=2
there will be two IPs on the network that the DNS name dataservice resolves to, so services that reach it by hostname will load-balance across them.
I would like to do the same for the edge proxy. The point is that
docker-compose scale edgeproxy=2
would cause edgeproxy to resolve to one of two possible IP addresses.
But the semantics of exposing ports are wrong for this. If I expose:
8443:8443
then it will try to bind each edgeproxy instance to host port 8443. What I want is more like:
0.0.0.0:8443:edgeproxy:8443
where a connection arriving at host port 8443 is bound to a randomly selected edgeproxy:8443 IP.
Is there an alternative to a plain port-forward? I want a port that lets me in to talk to any IP that resolves as edgeproxy.
This is provided by swarm mode. You can enable a single node swarm cluster with:
docker swarm init
And then deploy your compose file as a stack with:
docker stack deploy -c docker-compose.yml $stack_name
There are quite a few differences from docker compose, including:
Swarm doesn't build images
You manage the target state with docker service commands, trying to stop a container with docker stop won't work since swarm will restart it
The compose file needs to be in a v3 syntax
Networks will be an overlay network, and not attachable by containers outside of swarm, by default
One of the main changes is that exposed ports are published on an ingress network managed by swarm mode, and connections are round-robin load-balanced to your containers. You can also define a replica count inside the compose file, eliminating the need to run a scale command.
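For example, a minimal v3 sketch for the edgeproxy service from the question (image name is an assumption):
version: "3.8"
services:
  edgeproxy:
    image: example/edgeproxy   # hypothetical image
    ports:
      - "8443:8443"            # published on the ingress network, load balanced across replicas
    deploy:
      replicas: 2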
See more at: https://docs.docker.com/engine/swarm/

Docker swarm, Consul and Spring Boot

I have 6 microservices packed in Docker containers. On every swarm node I have installed a Consul agent, bound to the host IP, with the client listening on 0.0.0.0.
All microservices are in a docker-compose file which I am running from the Swarm manager.
The microservices are written in Java, and in bootstrap.yml I must specify the Consul agent endpoint. The possible choices are:
localhost
${HOSTIP} environment variable
Problems:
- localhost is not the host's localhost but the container's, and the Consul agent runs on the host, not in the container.
- ${HOSTIP}: I have to supply this environment variable in the compose file, but I don't know where the Swarm manager will schedule the microservice, so I cannot know which IP address will be used.
I tried to export the host IP address on each node, but since I am running compose from the manager, it will not read this variable.
Do you have any proposal for how to solve this? I have a Consul cluster: 3 managers and 3 nodes. On each manager and node I have a Consul agent started (as a Docker container). No matter what type of networking I use, I am not able to start the microservice. I started Consul with --net=host and --net=bridge, but this is not working.
Does anyone have an idea?
Thanks in advance.
So you are running Consul in containers as well, right? Is it possible in your setup to link containers? Then you could start the Consul container as "consul" on each host and link your microservices to it. Linked containers get a hosts entry, so the Consul service should be reachable at "consul:8500" from within your services.
Edit: If you are using the official Consul Docker image from Hashicorp, you can configure the client address to 0.0.0.0, which should make the Consul API available to the other containers running on the host.
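For example, something along these lines with the official image (the bind address is a placeholder and the cluster flags are trimmed down):
docker run -d --name consul --net=host consul agent -server -bind=10.0.0.5 -client=0.0.0.0 -bootstrap-expect=3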
Let me answer my own question: this is not the way to do it. We cannot put some things inside Swarm and some things outside Swarm and expect it to work; it will not. Consul as a service discovery tool cannot be used from outside Swarm either. The simple answer is to use Docker's own orchestration and service discovery and not involve Consul. If someone is using Swarm, everything should be in overlay networks (RabbitMQ, Redis, ELK and so on)...

Cross container communication with Docker

An application server is running as one Docker container and database running in another container. IP address of the database server is obtained as:
sudo docker inspect -f '{{ .NetworkSettings.IPAddress }}' db
Setting up JDBC resource in the application server to point to the database gives "java.net.ConnectException".
Linking containers is not an option since that only works on the same host.
How do I ensure that IP address of the database container is visible to the application server container?
If you want private networking between Docker containers on remote hosts, you can use Weave to set up an overlay network between them. If you don't need a private network, just expose the ports using the -p switch and configure the host machine's address as the destination IP in the required Docker container.
One simple way to solve this would be using Weave. It allows you to create many application-specific networks that can span multiple hosts as well as datacenters. It also has a very neat DNS-based service discovery mechanism.
I should disclose that I am on the Weave engineering team.
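A rough sketch with the classic Weave CLI (host addresses and image names are placeholders):
host1$ weave launch
host2$ weave launch 192.0.2.1        # peer with host1's address
host2$ eval $(weave env)             # point the docker CLI at the weave proxy
host2$ docker run -d --name db example/db
host1$ eval $(weave env)
host1$ docker run -d example/app     # "db" now resolves via weaveDNS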
Linking containers is not an option since that only works on the same host.
So are you saying your application is a container running on Docker server 1 and your DB is a container on Docker server 2? If so, treat them like ordinary remote hosts: your DB port needs to be exposed on Docker server 2, and that IP:port needs to be configured into your application server, typically via environment variables.
The per-host Docker subnetwork is a private network. It may be possible to make those addresses routable, but it would be a lot of pain, and it's further complicated by the fact that container IPs are not static.
What you need to do is publish the ports/services up to the host (via EXPOSE in the Dockerfile and -p on docker run), then communicate host to host. You can resolve hosts by IP, environment variables, or good old DNS.
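For example (addresses, images, and ports are placeholders):
# on docker server 2: publish the DB port on the host
docker run -d --name db -p 5432:5432 example/db
# on docker server 1: point the app server at server 2's host address
docker run -d -e DB_HOST=192.0.2.10 -e DB_PORT=5432 example/app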
A few things were missing that prevented cross-container communication:
WildFly was not bound to 0.0.0.0 and thus was not accepting requests on eth0. This was fixed using "-b 0.0.0.0".
The firewall was not allowing the containers to communicate. This was resolved using "systemctl stop firewalld; systemctl disable firewalld".
The VirtualBox image required a host-only adapter.
After this, the containers are able to communicate. Complete details are available at:
http://blog.arungupta.me/2014/12/wildfly-javaee7-mysql-link-two-docker-container-techtip65/
