Persistent storage solution for Docker Swarm

We have a five-node Docker Swarm cluster with three managers and two workers. These are all virtual machines running in a private local environment, and we need to set up shared storage for the swarm.
We are looking at GlusterFS and NFS as storage solutions.
We need pointers and help on setting up the storage solution.

Our Docker Swarm was set up on Azure. We installed the Azure storage driver plugin on each of the nodes; with the storage account name and access key we are able to mount Azure File storage into containers in the swarm.
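A minimal sketch, assuming an azurefile volume plugin already configured with the account credentials; the driver name, the option key share, and the share name swarmdata are placeholders that vary by plugin and version:

```
# Create a volume backed by an Azure File share (option keys depend on
# the plugin version -- check your plugin's documentation).
docker volume create --driver azurefile -o share=swarmdata azure_data

# Reference the volume from a swarm service; specifying volume-driver
# in the mount lets each node create the volume locally if needed.
docker service create --name web \
  --mount type=volume,source=azure_data,target=/data,volume-driver=azurefile \
  nginx
```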

Related

Can you please share the configuration for deploying a Consul server and agent in Docker Swarm with shared storage?

Can you please share a Consul deployment for Docker Swarm with shared storage? I was running into an error where no leader was elected among the servers.

How to manage networking and storage in a HashiCorp Nomad environment?

I'm curious how to manage networking and storage for containers (persistent data) managed by Nomad and running on Docker hosts, if one does not want to run k8s.
As for storage, it seems Nomad has started to use CSI (Container Storage Interface) plugins; see https://www.nomadproject.io/docs/job-specification/csi_plugin
As for networking, there's support for CNI (Container Network Interface); see https://www.nomadproject.io/docs/job-specification/network#network-examples

Do replicated docker containers in swarm mode contain multiple copies of data?

I have recently started learning Docker. While studying swarm mode I saw that containers can be scaled up. What I would like to know is: once you scale a container in replicated mode, will the data within the container be replicated too, or will just fresh containers be spawned?
For example, let's say I create a MySQL service initially with only one replica. I create and update tables in that MySQL container. Later I scale it to 3. Will the newly spawned containers contain the same table data? And will the data be continuously replicated across the 3 instances?
A replicated service will use a fresh container instance per replica. Swarm does not take care of replicating persistent data stored in volumes.
Depending on the volume plugin (e.g. the local driver with remote NFS shares), you are limited to read-write-once or read-write-many. Even if your volume allows read-write-many, the service replicas might not support that: for instance, MySQL will not work if you point n replicas at the same volume. You can leverage swarm service template variables to point your volumes at different target folders of the same NFS share, as in the sketch below.
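A minimal sketch, assuming volumes named db-data-1 through db-data-3 have been created up front, each pointing at its own folder on the share; {{.Task.Slot}} is expanded by Swarm to the replica's slot number:

```
# Each replica resolves {{.Task.Slot}} to its own slot number (1..3),
# so every task mounts a different pre-created volume.
docker service create --name db --replicas 3 \
  --mount "type=volume,source=db-data-{{.Task.Slot}},target=/var/lib/mysql" \
  mysql:8
```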
Also, with swarm you will want storage that is reachable from all nodes, as a container can die and be re-spawned on a different node. So you will either need a remote share based on NFS or CIFS (see the NFS volume sketch below), a storage cluster like Ceph or GlusterFS, or cloud native storage like Portworx. While you have to take care of HA yourself for remote-share solutions, data replication is built in for storage clusters and cloud native storage.
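A minimal sketch of an NFS-backed named volume using the built-in local driver; the server address 10.0.0.10 and export path /export/data are placeholders:

```
# Named volume whose data lives on an NFS export; any node that runs
# a task using this volume mounts the same share.
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.0.10,rw,nfsvers=4 \
  --opt device=:/export/data \
  nfs_data
```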
In case a containerized service itself is cluster/replica aware, it is usually better not to use the swarm replica mechanism, unless all instances can be started with the same set of parameters.

Persistent storage for a WebDAV server on docker swarm?

How can I achieve persistent storage for a WebDAV server running on several/any swarm nodes?
It's part of a docker-compose app running on my own vSphere infrastructure.
I was thinking about mounting an external NFS share from inside the containers (at the OS level, not as docker volumes), but then how would that be better than running WebDAV outside the swarm cluster?
I can think of 2 options:
GlusterFS
This option is vSphere independent. You can create replicated bricks and store your volumes on them, exposing the same volume to multiple docker hosts. So in case of a node failure the container gets restarted on another node and keeps its persistent storage. You can also mount the same persistent data in multiple containers. A sketch follows below.
There is one catch: the same data consumes disk space on multiple nodes.
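A minimal sketch, assuming two storage nodes (node1, node2) with brick directories under /bricks/gv0; the volume name gv0, the mount point, and the WebDAV image are placeholders:

```
# On node1: form the trusted pool and create a 2-way replicated volume.
gluster peer probe node2
gluster volume create gv0 replica 2 node1:/bricks/gv0 node2:/bricks/gv0
gluster volume start gv0

# On each Docker host: mount the Gluster volume...
mount -t glusterfs node1:/gv0 /mnt/gv0

# ...and bind-mount it into the service, so a rescheduled task finds
# the same data on whatever node it lands on.
docker service create --name webdav \
  --mount type=bind,source=/mnt/gv0,target=/var/lib/dav \
  bytemark/webdav
```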
Docker-Volume-vSphere
This option requires vSphere hosts. You can create docker volumes on VMFS datastores; they are shared between the docker hosts (virtual machines). So in case of a failure the container restarts on another node and has its persistent data available. Multiple containers can share a single volume.
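A minimal sketch, assuming the vSphere Docker Volume Service plugin is installed on the ESXi hosts and the Docker VMs; the volume name, size option, and image are placeholders, and option keys vary by plugin version:

```
# Create a volume on a VMFS datastore via the vSphere volume driver.
docker volume create --driver vsphere -o size=10gb webdav_data

# Any node can attach the volume when the task is (re)scheduled there.
docker service create --name webdav \
  --mount type=volume,source=webdav_data,target=/var/lib/dav,volume-driver=vsphere \
  bytemark/webdav
```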

Share docker images between hosts with NFS

I'm building a Mesosphere infrastructure on AWS instances with 3 master servers (running ZooKeeper, mesos-master, Marathon and HAProxy) and N slaves (running mesos-slave and docker).
If I run the same container on different slaves, Marathon downloads the same image on each slave. I would like to export a single NFS share (say, on master1) and mount it on every slave in order to have a single shared store for the images.
I'm using Ubuntu on the EC2 instances, so the storage driver used by default is devicemapper. I set up the slaves to mount /var/lib/docker/devicemapper and /var/lib/docker/graph, but it ends up with this error: "stale NFS file handle".
What I would like to understand is:
Is there a way to do it using a different storage driver?
In any case, does the docker daemon take locks on the files in this directory?
Is my approach wrong, possibly leading to concurrent-access issues?
Instead of using NFS to expose the backing file system, I think it would be easier to set up a docker-registry (with a volume on master1, so the data is persisted there) and have the other nodes pull images via the docker protocol, e.g. docker pull master1:5000/image:latest.
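A minimal sketch, assuming master1 resolves from the slaves and each daemon trusts master1:5000 (via TLS or an insecure-registries entry); the image name myimage and host path /opt/registry are placeholders:

```
# On master1: run a private registry, persisting image data on the host.
docker run -d -p 5000:5000 --name registry \
  -v /opt/registry:/var/lib/registry \
  registry:2

# Tag and push an image once...
docker tag myimage:latest master1:5000/myimage:latest
docker push master1:5000/myimage:latest

# ...then every slave pulls from the shared registry.
docker pull master1:5000/myimage:latest
```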
