I am currently trying to set up a Docker cluster in the following way:
NodeA: SwarmManager1 + Consul1
NodeB: SwarmManager2 + Consul2
NodeC: SwarmNode1 (advertising to Consul1) + Consul3
NodeD: SwarmNode2 (advertising to Consul2)
I did some HA testing and found the following behavior:
I have restarted NodeB while monitoring the docker cluster info and I noticed that SwarmNode2 was disconnected from the cluster during the reboot time.
My explanation is that because Consul2 goes down, and SwarmNode2 is configured to connect to that same Consul, the node becomes unavailable from the cluster's perspective.
What is the correct way to setup the discovery service for the Swarm containers in order to avoid this problem?
I suggest creating a consul cluster, preferably stand-alone on different nodes.
Once the cluster is created, all consul clients should continue functioning properly as long as quorum is maintained.
I also suggest giving multiple consul server addresses with the -join flag, to ensure the agent will be able to rejoin in case it restarts while some of the consul servers are down.
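As a sketch of that suggestion: the agent can be given several server addresses with `-retry-join` (a variant of `-join` that keeps retrying), so it can come back up even while one of the servers is down. The hostnames below are placeholders:

```shell
# Hypothetical hostnames; -retry-join retries until a join succeeds,
# so a restarted agent rejoins even if one consul server is unreachable.
consul agent -data-dir=/var/lib/consul \
  -retry-join=consul1.example.com \
  -retry-join=consul2.example.com \
  -retry-join=consul3.example.com
```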
Related
I am currently attempting a Kafka cluster deployment in Docker Swarm. Kafka does not work with the replica feature of Swarm because each Kafka broker (node) needs to be configured and reachable individually (i.e. no load balancer in front of it). Therefore, each broker is configured as an individual service with replicas=1, e.g. kafka1, kafka2 and kafka3 services.
Every now and then the configuration or image for the Kafka brokers will need to be changed via a docker stack deploy (done by a person or CI/CD pipeline). Then Swarm will recreate all containers simultaneously and as a result, the Kafka cluster is temporarily unavailable, which is not acceptable for a critical piece of infrastructure that is supposed to run 24/7. And I haven't even mentioned the Zookeeper cluster underneath Kafka yet, for which the same applies.
The desired behavior is that Swarm recreates the container of the kafka1 service, waits until it has fully started up and synchronized with the other brokers (all topic partitions are in sync), and only then Swarm restarts kafka2 service and so on.
I think I can construct a health check within the Kafka Docker image that would tell the Docker engine when the Kafka broker is fully synchronized. But how can I make Swarm perform what amounts to a rolling update across service boundaries? It ignores the depends_on setting that Docker Compose knows, and rolling update policies apply to service replicas only. Any idea?
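One workaround (a sketch, not a built-in Swarm feature) is to drive the update from a script instead of a single `docker stack deploy`: update one service at a time and wait for its task to converge before touching the next broker. Stack, service, and image names below are placeholders, and the approach assumes the image's HEALTHCHECK only reports healthy once the broker is in sync:

```shell
# Update each single-replica Kafka service in turn. docker service update
# blocks until the task converges, but we also verify the new task is
# running (i.e., its health check passed) before moving on.
for svc in mystack_kafka1 mystack_kafka2 mystack_kafka3; do
  docker service update --image myregistry/kafka:new "$svc"
  until docker service ps "$svc" --filter desired-state=running \
        --format '{{.CurrentState}}' | grep -q Running; do
    sleep 5
  done
done
```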
I have been building a distributed load testing application using Kubernetes and Locust (similar to this).
I currently have a multi-node cluster running on bare-metal (running on an Ubuntu 18.04 server, set up using Kubeadm, and with Flannel as my pod networking addon).
The architecture of my cluster is as follows:
I have a 'master instance' of the Locust application running on my master node.
I have 'slave instances' of the Locust application running on all of my other nodes. These slave instances must be able to bind to a port (5558 by default) of the master instance.
As of now, I don't believe this is happening. My cluster shows that all of my deployments are healthy and running; however, I am unable to access the logs of any of my slave instances running on nodes other than my master node. This leads me to believe that my pods are unable to communicate with each other across different nodes.
Is this an issue with my current networking or deployment setup (I followed the linked guides pretty much verbatim)? Where should I start in debugging this issue?
The question is how the slave instances try to join the master instance. You have to create a master Service (with labels/selectors) so they can reach the master pod. Also, make sure your SDN is up and the master is reachable from the slave instances. You can test this by running telnet against the master pod IP from a slave instance.
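As a sketch of that test, using the port from the question (pod names, label, and IP are placeholders):

```shell
# Find the master pod's IP (note it from the IP column)
kubectl get pod -l app=locust-master -o wide

# From a slave pod, check that the master's port 5558 is reachable;
# replace 10.244.1.5 with the master pod IP found above
kubectl exec -it locust-slave-abc123 -- sh -c 'nc -zv 10.244.1.5 5558'
```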
Based on your description of the problem I can guess that you have a connection problem caused by firewall or network misconfiguration.
From the network perspective, there are requirements mentioned in Kubernetes documentation:
all containers can communicate with all other containers without NAT
all nodes can communicate with all containers (and vice-versa) without NAT
the IP that a container sees itself as is the same IP that others see it as
From the firewall perspective, you need to ensure the cluster traffic can pass the firewall on the nodes.
Here is the list of ports you should have open on the nodes, as provided on the CoreOS website:
Master node inbound:
TCP: 443 from Worker Nodes, API Requests, and End-Users
UDP: 8285, 8472 from Master & Worker Nodes
Worker node inbound:
TCP: 10250 from Master Nodes
TCP: 10255 from Heapster
TCP: 30000-32767 from External Application Consumers
TCP: 1-32767 from Master & Worker Nodes
TCP: 179 from Worker Nodes
UDP: 8472 from Master & Worker Nodes
UDP: 179 from Worker Nodes
Etcd node inbound:
TCP: 2379-2380 from Master & Worker Nodes
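As a sketch, on a distribution that uses firewalld, some of the worker-node rules above could be opened like this (adjust to your distribution's firewall tooling):

```shell
firewall-cmd --permanent --add-port=10250/tcp        # kubelet, from masters
firewall-cmd --permanent --add-port=30000-32767/tcp  # NodePort range
firewall-cmd --permanent --add-port=8472/udp         # flannel VXLAN
firewall-cmd --reload
```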
Also check that IP forwarding is enabled on all the nodes:
# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
If it is not, enable it like this and test again:
echo 1 > /proc/sys/net/ipv4/ip_forward
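Note that writing to /proc does not survive a reboot; the setting can also be persisted via sysctl configuration (a sketch; the file name is arbitrary):

```shell
# Persist IP forwarding across reboots
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ip-forward.conf
sysctl --system   # reload all sysctl configuration files
```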
I have set up a three-node Quorum network using Docker. If my network crashes and I only have the information of one of the nodes, can I recover that node using the binary? Also, the blocks in the new network should be in sync with the others. Please guide me on how this is possible.
I assume you’re using Docker Swarm. The clustering facility within the swarm is maintained by the manager nodes. If for any reason the manager leader becomes unavailable and there are not enough remaining managers to reach quorum and elect a new leader, quorum is lost and no manager node is able to control the swarm.
In such situations, it may be necessary to re-initialize the swarm and force the creation of a new cluster by running the following command on the former leader once it is brought back online:
# docker swarm init --force-new-cluster
This removes all managers except the manager the command was run from. The good thing is that worker nodes will continue to function normally and the other manager nodes should resume functionality once the swarm has been re-initialized.
Sometimes it might be necessary to remove manager nodes from the swarm and rejoin them to the swarm.
But note that when a node rejoins the swarm, it must join the swarm via a manager node.
You can always monitor the health of manager nodes with docker node inspect, which queries the /nodes endpoint of the Docker API:
# docker node inspect manager1 --format "{{ .ManagerStatus.Reachability }}"
# docker node inspect manager1 --format "{{ .Status.State }}"
Also, make it a practice to automate backups of the Docker Swarm config directory /var/lib/docker/swarm so you can easily recover from a disaster.
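A minimal backup sketch (paths and schedule are placeholders; stopping Docker briefly gives a consistent snapshot, so prefer running this on a non-leader manager):

```shell
# Snapshot the swarm state directory while the daemon is stopped
systemctl stop docker
tar -czf /backup/swarm-$(date +%F).tar.gz /var/lib/docker/swarm
systemctl start docker
```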
I am currently trying to use Docker Swarm to set up our application (consisting of both stateless and stateful services) in a highly available fashion on a three-node cluster. By "highly available" I mean it can survive the failure of one of the three nodes.
We have been doing such installations (using other means, not Docker, let alone Docker Swarm) for quite a while now with good success, including acceptable failover behavior, so our application itself (or rather, the services that constitute it) has proven that it can be made highly available in such a three-node setup.
With Swarm, I get the application up and running successfully (with all three nodes up) and have taken care to configure each service redundantly, i.e., more than one instance exists for each of them, they are properly configured for HA, and not all instances of a service are located on the same Swarm node. Of course, I also took care that all my Swarm nodes joined the Swarm as manager nodes, so that any one of them can become leader if the original leader node fails.
In this "good" state, I can reach the services on their exposed ports on any of the nodes, thanks to Swarm's Ingress networking.
Very cool.
In a production environment, we could now put a highly-available loadbalancer in front of our swarm worker nodes, so that clients have a single IP address to connect to and would not even notice if one of the nodes goes down.
So now it is time to test failover behavior...
I would expect that killing one Swarm node (i.e., hard shutdown of the VM) would leave my application running, albeit in "degraded" mode, of course.
Alas, after doing the shutdown, I cannot reach ANY of my services via their exposed (via Ingress) ports anymore for a considerable time. Some do become reachable again and indeed have recovered successfully (e.g., a three node Elasticsearch cluster can be accessed again, of course now lacking one node, but back in "green" state). But others (alas, this includes our internal LB...) remain unreachable via their published ports.
"docker node ls" shows one node as unreachable
$ docker node ls
ID                            HOSTNAME   STATUS  AVAILABILITY  MANAGER STATUS
kma44tewzpya80a58boxn9k4s *   manager1   Ready   Active        Reachable
uhz0y2xkd7fkztfuofq3uqufp     manager2   Ready   Active        Leader
x4bggf8cu371qhi0fva5ucpxo     manager3   Down    Active        Unreachable
as expected.
What could I be doing wrong regarding my Swarm setup that causes these effects? Am I just expecting too much here?
I have a setup where I am deploying a spring-cloud-consul application from within a docker swarm overlay network. In my overlay network I have created consul images on each node. When I spin up the spring-cloud-consul application I have to specify the host name of the consul agent it should talk to such as "discovery" so it can advertise itself and query for service discovery. The issue here is that every container then is querying the same consul agent. When I remove this particular consul agent the Ribbon DiscoveryClient seems to rely on its own cache rather than use one of the other consul nodes.
What is the proper way of starting up a microservice application using spring-cloud-consul and Consul such that it is not reliant on one fixed consul agent?
Solutions I have thought of trying:
Having multiple compose files which specify different consul agents.
Somehow having the docker image identify the node it is on and then set itself to use the consul agent local to that node. (Not sure how to accomplish this yet.)
Package a consul agent with the spring-boot application.
Thank you for your help.
The consul agent must run on every node in the cluster. It is not necessary to run the consul agent inside every docker container, just on every node. You have the choice of installing the consul agent on every node, or running the consul agent in a docker container on every node.
For the consul agent in a docker container solution you will need to ensure you have the consul agent container running before other containers are started.
For details on running the consul agent in client mode in a docker container see: https://hub.docker.com/_/consul/ and search for Running Consul Agent in Client Mode. This defines the agent container with --net=host networking, so the agent behaves like it is installed natively, when it is actually in a docker container.
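A sketch of that pattern, with the server address as a placeholder:

```shell
# Run the consul agent in client mode on each node with host networking,
# so services on the node can reach it at localhost:8500.
docker run -d --name=consul-agent --net=host consul agent \
  -retry-join=consul-server.example.com -client=0.0.0.0
```

With this in place, the spring-cloud-consul services can simply point at the local agent on each node instead of one fixed remote agent.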