Spring Cloud with Consul for High Availability - docker

I have a setup where I am deploying a spring-cloud-consul application from within a docker swarm overlay network. In my overlay network I have created consul images on each node. When I spin up the spring-cloud-consul application I have to specify the host name of the consul agent it should talk to such as "discovery" so it can advertise itself and query for service discovery. The issue here is that every container then is querying the same consul agent. When I remove this particular consul agent the Ribbon DiscoveryClient seems to rely on its own cache rather than use one of the other consul nodes.
What is the proper way to start up a microservice application using spring-cloud-consul and Consul such that it is not reliant on one fixed consul agent?
Solutions I have thought of trying:
Having multiple compose files, each of which specifies a different consul agent.
Somehow having the docker image identify the node it is on and then set itself to use the consul agent local to that node. (Not sure how to accomplish this yet.)
Package a consul agent with the spring-boot application.
Thank you for your help.

The consul agent must run on every node in the cluster. It is not necessary to run the consul agent inside every docker container, just on every node. You have the choice of installing the consul agent on every node, or running the consul agent in a docker container on every node.
For the consul-agent-in-a-docker-container approach, you will need to ensure the consul agent container is running before the other containers are started.
For details on running the consul agent in client mode in a docker container, see https://hub.docker.com/_/consul/ and search for "Running Consul Agent in Client Mode". That example defines the agent container with --net=host networking, so the agent behaves as if it were installed natively even though it is actually running in a docker container.
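A rough per-node sketch of that setup (the image names, join address, bridge gateway IP, and the use of the spring.cloud.consul.host property via the environment are assumptions, not part of the original answer):

# On every swarm node: a Consul agent in client mode with host networking,
# joined to the Consul servers (replace the join address with your own).
docker run -d --name consul-agent --net=host \
  consul agent -client=0.0.0.0 -retry-join=<consul server ip>
# Application containers on that node can then reach the node-local agent,
# for example through the docker bridge gateway (often 172.17.0.1),
# instead of relying on one fixed "discovery" host:
docker run -d -e SPRING_CLOUD_CONSUL_HOST=172.17.0.1 my-spring-cloud-app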

Related

how can I connect two docker containers with nomad

I built two docker applications that communicate with each other over a docker network, but I ran into a problem when I tried to run them with nomad: the container name is not configurable and nomad gives each container a random name, so I can't attach those containers to the docker network and have them reach each other by their specific names.
So how can I run two or more docker containers in the same docker network using nomad?
I'm aware of a few approaches. The first one works with Nomad alone; the others assume that Consul is deployed as well.
Place both containers in the same task group. Nomad will then always schedule them on the same node, and you can get the address via the Nomad environment variables NOMAD_IP_<label>, NOMAD_PORT_<label> or NOMAD_ADDR_<label>.
Register the server application (docker container) in the Consul service registry with the Nomad service stanza. You can then use the Nomad template stanza in the "client" application to render its config. Example/doc is here.
Setup consul connect (service mesh) in your deployment.
You could use the Consul DNS interface. Consul can act as a DNS server, and every service is resolvable at <service_name>.service.<dc>.consul (doc), but you have to configure your servers to use Consul DNS (doc). A quick check of the lookup is shown below.
Approach 1 is the easiest but has a big limitation (everything ends up on the same node). Approach 2 has worked well for me for several years; Nomad is smart enough to re-render the config and reload/restart your client should the server's IP/port change.
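A quick sanity check of the Consul DNS option (approach 4 above), assuming the server application is registered under the hypothetical name server-app and the agent's DNS interface is on its default port 8600:

dig @127.0.0.1 -p 8600 server-app.service.consul SRV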

Connecting from inside a swarm container to another manager host

I've got a Docker swarm cluster where the manager nodes are in permanent 'drain' mode, i.e. no container will ever run on them.
Now I'm running Jenkins in a container on a worker node and I'd like Jenkins to be able to deploy images to the swarm cluster.
My reasoning so far:
Mounting /var/run/docker.sock is obviously not an option, as the docker manager and the Jenkins container are on different hosts and the local docker daemon is not a swarm manager (see the sketch below).
Connecting from the Jenkins container to the local docker host over TCP has the same problem.
Adding the Jenkins container to the host network (--network host) seems not to be possible: a container cannot be on the host network and in an overlay network at the same time.
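For reference, this is what the ruled-out first option would look like; the socket mounted here belongs to the worker's own daemon, which is not a swarm manager, so swarm deployments issued through it fail (image tag is illustrative):

docker run -d --name jenkins \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts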
I assume this is quite a common use case, yet I haven't found a solution and maybe someone here has an idea.
Thanks!

How to Distribute Jenkins Slave Containers Within Docker Swarm

I would like my Jenkins master (not containerized) to create slaves within containers. So I have installed the docker plugin into Jenkins, created a docker server, configured it, and Jenkins does indeed spin up a slave container fine after the job is created.
However, after I created another docker server, formed a swarm out of the two of them, and tried running Jenkins jobs again, it continued to deploy containers only on the original server (which is now also a manager). I was expecting the swarm to balance the load and distribute the newly created containers evenly across the swarm. What am I missing?
Do I have to use a service perhaps?
Docker containers by themselves are not load balanced, even when deployed in a swarm. What you're looking for is indeed a service definition. Just be careful with port allocation: if you deploy your Jenkins slaves to listen on port 80, for example, all swarm hosts will listen on port 80 and mesh-route requests to the containers.
That basically means you couldn't deploy anything else to port 80 on those hosts. Once that's done, however, any request to any of the hosts is load balanced to the containers.
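A minimal sketch of such a service definition (the image name is a placeholder and the published port just follows the port 80 example above):

docker service create --name JenkinsService --replicas 2 \
  --publish 80:80 my-jenkins-slave-image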
The other nice thing is that you can dynamically change the number of replicas with service update
docker service update JenkinsService --replicas 42
While 42 may be extreme, you could obviously change it :)
At the time there was nothing I could find in swarm that would let me control container distribution across the swarm nodes.
I ended up using the more flexible Kubernetes for that purpose. I think Marathon is capable of this as well.

Docker swarm, Consul and Spring Boot

I have 6 microservices packed in docker containers. On every swarm node I have installed a consul agent, bound to the host IP, with the client address set to 0.0.0.0.
All microservices are in a docker-compose file which I am running from the Swarm manager.
The microservices are written in Java, and in bootstrap.yml I must specify the consul agent endpoint. The possible choices are:
localhost
${HOSTIP} environment variable
Problems:
- localhost is not the host's localhost but the container's localhost, and the consul agent is not on the container's localhost but on the host.
- ${HOSTIP}: I have to supply this environment variable in the compose file, but I don't know where the Swarm manager will schedule the microservice, so I cannot know which IP address will be used.
I tried to expose the host IP address on each node, but since I am running compose from the manager, it will not read this variable.
Do you have any proposal for how to solve this? I have a consul cluster with 3 managers and 3 worker nodes, and on each manager and node I have a consul agent started (as a docker container). No matter what type of networking I use, I am not able to start up the microservices. I started consul with --net=host and with --net=bridge, but neither is working.
Does anyone have an idea?
Thanks in advance.
So you are running consul in containers as well, right? Is it possible in your setup to link containers? Then you could start the consul container as "consul" on each host and link your microservices to it. Linked containers get a hosts entry, so the consul service should be reachable at "consul:8500" from within your services.
Edit: If you are using the official Consul Docker image from Hashicorp, you can set the client address to 0.0.0.0; this makes the consul API available to the other containers running on the host.
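A rough sketch of the linking idea (the image names and the join address are placeholders, and it assumes the standard spring.cloud.consul.host property is picked up from the environment):

docker run -d --name consul consul agent -client=0.0.0.0 -retry-join=<consul server ip>
docker run -d --link consul:consul \
  -e SPRING_CLOUD_CONSUL_HOST=consul -e SPRING_CLOUD_CONSUL_PORT=8500 \
  my-spring-boot-service
# The link adds a "consul" entry to the service container's /etc/hosts,
# so the Spring Cloud Consul client can reach the agent at consul:8500.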
Let me answer my own question: this is not the way to do it. We cannot put some things inside Swarm and some things outside Swarm and expect it to work; it will not. Consul as a service discovery mechanism cannot be used from outside the Swarm either. The simple answer is to use Docker's own orchestration and service discovery and not involve Consul. If you are using Swarm, everything should be on overlay networks (rabbit, redis, elk and so on)...
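For what that conclusion looks like in practice, a small sketch of relying on Docker's built-in service discovery on an overlay network (service and image names are illustrative):

docker network create --driver overlay backend
docker service create --name redis --network backend redis
docker service create --name my-service --network backend \
  -e SPRING_REDIS_HOST=redis my-spring-boot-service
# Services on the same overlay network resolve each other by service name
# through Docker's embedded DNS, so no external registry is needed.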

Consul/Registrator architecture - do I need a separate Consul agent on each VM?

I'm trying to use Consul and Registrator to pick up microservices in various VMs but I think I'm not quite getting something. I understand that Registrator auto-registers containers with Consul. So I was thinking that I'd have one VM that runs Consul, and then for each microservice I'd have a VM with Registrator + the microservice.
However, I'm unable to get Registrator to talk to the Consul agent in a separate VM. Looking more closely at suggested architecture, it seems that I need a separate Consul agent on each VM. Am I understanding that right? If so, why? Shouldn't Registrator just be able to forward the container info to a Consul agent on any VM?
Also, do I need to run Registrator on the VM with the Consul agent and servers?
You need to have a Consul agent on every VM that provides a service, so that it can communicate service information to the Consul servers.
This blog post has good information:
Consul Architecture
Every node that provides services to Consul runs a Consul agent. The agent is responsible for checking the health of the services on the node as well as the health of the node itself. The agents talk to one or more Consul servers.
Registrator Agent
The Registrator agent automatically registers/deregisters services for ECS tasks or services, based on published ports and metadata from the container environment variables defined in the ECS task definition.
So the two are complementary, and both need to be deployed on each VM that runs a service you want discovered.
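A minimal per-VM sketch of that pairing (image tags and the join address are placeholders):

# Local Consul agent in client mode, joined to the Consul servers
docker run -d --name consul-agent --net=host \
  consul agent -client=0.0.0.0 -retry-join=<consul server ip>
# Registrator on the same VM, watching the local Docker daemon and
# registering containers with the local agent
docker run -d --name registrator --net=host \
  -v /var/run/docker.sock:/tmp/docker.sock \
  gliderlabs/registrator consul://localhost:8500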
