We have a swarm running docker 1.13 to which I need to add 3 more nodes running docker 17.04.
Is this possible or will it cause problems?
Will it be possible to update the old nodes without bringing the entire swarm down?
Thanks
I ran into this one myself yesterday and the advice from the Docker developers is that you can mix versions of docker on the swarm managers temporarily, but you cannot promote or demote nodes that don't match the version on all the other swarm managers. They also recommended upgrading all managers before upgrading workers.
According to that advice, you should upgrade the old nodes first, one at a time, to avoid bringing down the cluster. If containers are deployed to those managers, you'll want to configure the node to drain with docker node update --availability drain $node_name first. After the upgrade, you can bring it back into service with docker node update --availability active $node_name.
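A minimal sketch of that rolling loop for a single node (assuming an apt-based host and the docker-ce package; the node name and package commands are placeholders for your environment):

# move tasks off the node before touching the engine
docker node update --availability drain node-1
# wait until 'docker node ps node-1' shows no running tasks, then upgrade
sudo apt-get update && sudo apt-get install -y docker-ce
# return the node to scheduling
docker node update --availability active node-1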
When I tried to promote a newer node into an older swarm, I saw some very disruptive behavior that wasn't obvious until I looked at the debug logs. The comments on this issue go into more detail on Docker's advice and the problems I saw.
Related
We have a set of 3 managers and 3 workers in a Docker Swarm cluster (community edition) running on RHEL 8.1 in a DMZ. We have a like-for-like setup in a non-prod environment, where we have no issues when we patch the underlying VMs to the latest RHEL 8.x versions, including upgrading Docker to the latest version.
But any time we try patching the production cluster VMs, the swarm on the surface comes back up fine and we see all the services and tasks running, yet for some reason the swarm loses its ingress load-balancing capability. We have tried upgrading several different ways, many times, but every time we end up with the same result and have had to revert.
Can anyone please shed some light on where to look and why this could be happening?
Thanks in advance,
ethtool -K <interface> tx off
This fixed it for us; see: Docker Swarm Overlay Network ICMP Works, But Not Anything Else
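If you want to confirm that checksum offloading is the culprit before changing anything, something like the following helps (eth0 is a placeholder for whichever interface carries the overlay/VXLAN traffic; the setting resets at reboot unless you persist it in your network configuration):

# show the current offload settings for the interface
ethtool -k eth0 | grep checksum
# disable TX checksum offload to work around the broken overlay traffic
sudo ethtool -K eth0 tx off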
I am fairly new to docker and docker swarm.
Recently I got a request from a business client that has 3 servers running cPanel/WHM with some WordPress sites installed on each server.
He wants to replicate each server 3 times for high availability, having a total of 9 nodes in the network.
My question is, what happens if, for example, one new post is added to one of the wordpress sites? How can I make that change to propagate to the other nodes that are replicas of the main one where the change was made?
My assumption is to deploy cPanel in a container with all data in volumes, use docker swarm to replicate it, and finally use additional software to sync files between containers. But I am sure there must be a better, more professional, and more straightforward approach to this.
Can anyone advise me?
Thanks
I'd like to upgrade the Docker engine on my Docker Swarm managed nodes (both manager and worker nodes) from 18.06 to 19.03, without causing any downtime.
I see there are many tutorials online for rolling update of a Dockerized application without downtime, but nothing related to upgrading the Docker engine on all Docker Swarm managed nodes.
Is it really not possible to upgrade the Docker daemon on Docker Swarm managed nodes without a downtime? If true, that would indeed be a pity.
Thanks in advance to the wonderful community at SO!
You can upgrade managers in place, one at a time. During this upgrade process, you drain the node with docker node update, run the upgrade of the Docker engine with the normal OS commands, and then return the node to active. What will not work is adding or removing nodes from the cluster while the managers have mixed versions. This means you cannot completely replace nodes with a from-scratch install at the same time as you upgrade versions; all managers need to be on the same (upgraded) version first, and only then should you look at rebuilding/replacing the hosts. What I've seen in the past when this isn't respected is that new nodes do not fully join the manager quorum, and after losing enough managers you eventually lose quorum.
Once all managers are upgraded, you can upgrade the workers, either with in-place upgrades or by replacing the nodes. Until the workers have all been upgraded, do not use any new features.
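A quick way to see where you stand mid-upgrade is to check the engine version each node reports (a sketch; node-1 is a placeholder, and the exact output columns vary by Docker version):

# list nodes with availability, manager status, and engine version
docker node ls
# or pull just the engine version for a single node
docker node inspect node-1 --format '{{ .Description.Engine.EngineVersion }}'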
You can drain your node, upgrade your Docker version, and then make it ACTIVE again.
Repeat these steps for all the nodes.
DRAIN availability prevents a node from receiving new tasks from the swarm manager. The manager stops tasks running on the node and launches replica tasks on nodes with ACTIVE availability.
For detailed information, see: https://docs.docker.com/engine/swarm/swarm-tutorial/drain-node/
Given that I have only one machine (a high-spec laptop), can I run the entire DC/OS stack on my laptop, purely for simulation/learning purposes? The way I was thinking of setting this up was with some number N of docker containers (with networking enabled between them), where some of those N would be masters, some slaves, one maybe a ZooKeeper, and 1 container to run the scheduler/application. So basically one docker container would be synonymous with a machine instance in this case (since I don't have multiple machines, and using multiple VMs on one machine would be overkill).
Has this already been done, so that I can try it out straight away, or am I completely missing something here in my understanding?
We're running such a development configuration, where ZooKeeper, the Mesos masters and slaves, as well as Marathon all run fully dockerized (but on a 3-machine bare-metal cluster) on CoreOS latest stable. It has some known downsides; for example, when a slave dies, AFAIK the restarted slave cannot recover the tasks that were running.
I think it also depends on the OS you're running on your laptop. If it's non-Windows, you should normally be fine. If your system supports systemd, you can have a look at tobilg/coreos-setup to see how I start the Mesos services via Docker.
Still, I would recommend using a Vagrant/VirtualBox solution if you just want to test how Mesos works/"feels"... it will probably save you some headaches compared to a from-scratch solution. The tobilg/coreos-mesos-cluster project runs the services via Docker on CoreOS within Vagrant.
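If you go that route, the workflow should be the usual Vagrant one (a sketch, assuming Vagrant and VirtualBox are already installed; check the repo's README for its actual requirements and targets):

# fetch the Vagrant-based CoreOS/Mesos cluster and boot it
git clone https://github.com/tobilg/coreos-mesos-cluster.git
cd coreos-mesos-cluster
vagrant up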
Also, you can have a look at dharmeshkakadia/awesome-mesos and especially the Vagrant based setup section to get some references.
Have a look at https://github.com/dcos/dcos-docker. It is quite young, but it enables you to do exactly what you want.
It starts a DC/OS cluster with masters and agents on a single node, in docker containers.
I'm building an Apache Mesos cluster with 3 masters and 3 slaves. I installed docker on the slave nodes and it's able to create instances, which are visible in Marathon. Then I tried to install the HAProxy server on top of it, but that didn't work out well, so I deleted it.
The problem is that since then I'm only able to scale my application to a maximum of 3 instances, the exact number of slave nodes. When I want to scale to 5, there are 2 instances that are stuck at the 'deploying' stage.
Does anyone know how to fix this issue so I'm able to create more instances again?
Thank you
To make that work, you really need to set up Marathon service discovery with HAProxy, since your containers will be bound to arbitrary ports on the slave machines.
First, install HAProxy on every slave. If you need SSL, you will need to build HAProxy with SSL support.
Then, once the HAProxy service is running, follow this well-explained tutorial to enable Marathon service discovery on every slave:
HAProxy marathon Service discovery
Pay close attention to the tutorial; it is well explained and straightforward.
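As for why scaling stalled at one instance per slave: a fixed host port can only be bound by one task per machine, so three slaves cap you at three instances. Requesting a random host port (hostPort 0) and letting HAProxy route to whatever port gets assigned removes that cap. A sketch of such an app definition posted to Marathon's REST API (the Marathon address, app id, and image are placeholders):

# hostPort 0 lets Mesos assign a free port on the slave, so several
# instances of the same app can land on the same machine
cat > app.json <<'EOF'
{
  "id": "/web",
  "instances": 5,
  "cpus": 0.25,
  "mem": 128,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx:alpine",
      "network": "BRIDGE",
      "portMappings": [ { "containerPort": 80, "hostPort": 0 } ]
    }
  }
}
EOF
# submit the app to Marathon (address is a placeholder)
curl -X POST http://marathon.example:8080/v2/apps \
  -H 'Content-Type: application/json' -d @app.json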