Same data volume attached to multiple containers, even on different hosts - Docker

I'm able to bind a Docker volume to a specific container in a swarm thanks to Flocker, but now I would like to have multiple replicas of my server (for load balancing), so I'm looking for a way to bind the same data volume to multiple replicas of a Docker service.
In the Flocker documentation I found the following:
Can more than one container access the same volume? Flocker works by
creating a 1 to 1 relationship of a volume and a container. This means
you can have multiple volumes for one container, and those volumes
will always follow that container.
Flocker attaches volumes to the individual agent host (docker host)
and this can only be one host at a time because Flocker attaches
Block-based storage. Nodes on different hosts cannot access the same
volume, because it can only be attached to one node at a time.
If multiple containers on the same host want to use the same volume,
they can, but be careful because multiple containers accessing the
same storage volume can cause corruption.
Can I attach a single volume to multiple hosts? Not currently, support
from multi-attach backends like GCE in Read Only mode, or NFS-like
backends like storage, or distributed filesystems like GlusterFS would
need to be integrated. Flocker focuses mainly on block-storage use
cases that attach a volume to a single node at a time.
So I think it is not possible to do what I want with Flocker.
I could use a different orchestrator (Kubernetes) if that would help, even though I have no experience with it.
I would rather not use NAS/NFS or any distributed filesystem.
Any suggestions?
Thanks in advance.

In Kubernetes, you can mount a volume into different Pods at the same time if the technology that backs the volume supports shared access.
As mentioned in Kubernetes Persistent Volumes:
Access Modes
A PersistentVolume can be mounted on a host in any way
supported by the resource provider. As shown below, providers will
have different capabilities and each PV’s access modes are set to the
specific modes supported by that particular volume. For example, NFS
can support multiple read/write clients, but a specific NFS PV might
be exported on the server as read-only. Each PV gets its own set of
access modes describing that specific PV’s capabilities.
The access modes are:
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
Types of volumes that support ReadOnlyMany mode:
AzureFile
CephFS
FC
FlexVolume
GCEPersistentDisk
Glusterfs
iSCSI
Quobyte
NFS
RBD
ScaleIO
Types of volumes that support ReadWriteMany mode:
AzureFile
CephFS
Glusterfs
Quobyte
RBD
PortworxVolume
VsphereVolume (works when Pods are collocated)
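As an illustration, a ReadWriteMany setup could look like the sketch below. It assumes an NFS server reachable at 10.0.0.10 exporting /exports/data (both placeholders); every replica of a Deployment can then mount the same claim:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany            # many nodes may mount this read-write
  storageClassName: ""         # bind statically, without a provisioner
  nfs:                         # placeholder backend; any RWX-capable type works
    server: 10.0.0.10
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server
spec:
  replicas: 3                  # the load-balanced replicas from the question
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
        - name: server
          image: nginx         # placeholder for your server image
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: shared-pvc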

Related

What are the differences between the concepts of storage driver and volume driver in Docker?

I'm studying Docker and I couldn't quite understand the difference between what storage drivers are used for and what volume drivers are used for.
My theory (please correct me if I'm wrong) is that storage drivers manage the way Docker deals with the writable layer underneath, and can use overlay, overlay2, aufs, zfs, btrfs and so on.
Volume drivers, however, deal with volumes underneath; a volume can be local (in this case I think it will use a storage driver) or remote (like EBS).
Am I right?
Docker uses storage drivers to store image layers, and to store data in the writable layer of a container. Docker uses volume drivers for write-intensive data, data that must persist beyond the container's lifespan, and data that must be shared between containers. So, I understand storage drivers are used with image and container layers, while volume drivers are used for persistent container application data. See the first three paragraphs of this Docker documentation: https://docs.docker.com/storage/storagedriver/
Docker Engine volumes enable deployments to be integrated with external storage systems such as Amazon EBS, and enable data volumes to persist beyond the lifetime of a single Docker host. Here the term 'local' in the context of a Docker volume driver means that the volumes esdata1 and esdata2 are created on the same Docker host where you run your container. By using other volume plugins, e.g. --driver=flocker, you are able to create a volume on an external host and mount it to the local host, say, at /data-path.
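For example, in a compose file the driver is chosen per volume. A sketch (the flocker plugin is assumed to be installed; the image and mount path are placeholders, and the volume names mirror the ones mentioned above):

version: '3'
services:
  app:
    image: alpine
    command: tail -f /dev/null
    volumes:
      - esdata1:/usr/share/data   # placeholder mount point
volumes:
  esdata1:
    driver: local     # created on the same Docker host that runs the container
  esdata2:
    driver: flocker   # created through an external volume plugin instead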

Do replicated Docker containers in swarm mode contain multiple copies of data?

I have recently started learning Docker. However, when studying swarm mode I see that containers can be scaled up. What I would like to know is: once you scale a container in replicated mode, will the data within the container be replicated too, or will just fresh containers be spawned?
For example, let's say I created a mysql service initially with only 1 replica. I create and update tables in that mysql container. Later I scale it to 3. Will the newly spawned containers contain the same table data? And will the data be continuously replicated across the 3 Docker instances?
A replicated service will spawn fresh container instances per replica. Swarm does not take care of replicating persistent data stored in volumes.
Depending on the volume plugin (e.g. the local driver with remote NFS shares) you are limited to read-write-once or read-write-many. Even if your volume allows read-write-many, the service replicas might not support it; for instance, mysql will not work if you point n replicas to the same volume. You can leverage swarm service template variables, for instance, to point your volumes to different target folders of the same NFS share.
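The template approach is documented for docker service create, e.g. --mount type=volume,source=mysql-data-{{.Task.Slot}},target=/var/lib/mysql, which gives every task its own volume. Whether your compose file version passes the placeholder through to docker stack deploy may vary, so treat the stack form below as a sketch:

version: '3.8'
services:
  mysql:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder only
    deploy:
      replicas: 3
    volumes:
      # each task would get its own volume: mysql-data-1, mysql-data-2, mysql-data-3
      - type: volume
        source: mysql-data-{{.Task.Slot}}
        target: /var/lib/mysql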
Also, with swarm you will want storage that is reachable from all nodes, as a container can die and be re-spawned on a different node. So you will either need a remote share based on NFS or CIFS (see the example usages for nfs and cifs), a storage cluster like Ceph or GlusterFS, or a cloud native storage like Portworx. While you have to take care of HA yourself for remote-share solutions, data replication is built in for storage clusters and cloud native storage.
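For example, an NFS share can be consumed straight from a stack file through the local driver (a sketch; 10.0.0.10 and /export/data are placeholders for your NFS server and export):

version: '3.7'
services:
  app:
    image: alpine
    command: tail -f /dev/null
    volumes:
      - nfs-data:/data
volumes:
  nfs-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.10,rw,nfsvers=4"
      device: ":/export/data"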
In case a containerized service is itself cluster/replica aware, it is usually better not to use the swarm replica mechanism, unless all instances can be started with the same set of parameters.

Mount rexray/ceph volume in multiple containers on Docker swarm

What I have done
I have built a Docker Swarm cluster where I am running containers that have persistent data. To allow the container to move to another host in the event of failure I need resilient shared storage across the swarm. After looking into the various options I have implemented the following:
Installed a Ceph Storage Cluster across all nodes of the Swarm and created a RADOS Block Device (RBD).
http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
Installed REX-Ray on each node and configured it to use the RBD created above (a configuration sketch follows the stack file below). https://rexray.readthedocs.io/en/latest/user-guide/storage-providers/ceph/
Deployed a Docker stack that mounts a volume using the rexray driver, e.g.:
version: '3'
services:
  test-volume:
    image: ubuntu
    volumes:
      - test-volume:/test
volumes:
  test-volume:
    driver: rexray
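For reference, the REX-Ray configuration from step 2 (/etc/rexray/config.yml) might look roughly like this; using rbd as the pool name is an assumption taken from the default setup:

libstorage:
  service: rbd      # use the Ceph RADOS Block Device driver
rbd:
  defaultPool: rbd  # assumption: the pool in which volumes are created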
This solution is working, in that I can deploy a stack, simulate a failure on the node where it is running, and then observe the stack restarted on another node with no loss of persistent data.
However, I cannot mount a rexray volume in more than one container. My reason for doing so is to use a short-lived "backup container" that simply tars the volume to a snapshot backup while the main container is still running.
My Question
Can I mount my rexray volumes into a second container?
The second container only needs read access so it can tar the volume to a snapshot backup while keeping the first container running.
Unfortunately the answer is no: in this use case, rexray volumes cannot be mounted into a second container. Some information below will hopefully assist anyone heading down a similar path:
Rexray does not support multiple mounts:
Today REX-Ray was designed to actually ensure safety among many hosts that could potentially have access to the same host. This means that it forcefully restricts a single volume to only be available to one host at a time. (https://github.com/rexray/rexray/issues/343#issuecomment-198568291)
But Rexray does support a feature called pre-emption where:
..if a second host does request the volume that he is able to forcefully detach it from the original host first, and then bring it to himself. This would simulate a power-off operation of a host attached to a volume where all bits in memory on original host that have not been flushed down is lost. This would support the Swarm use case with a host that fails, and a container trying to be re-scheduled.
(https://github.com/rexray/rexray/issues/343#issuecomment-198568291)
However, pre-emption is not supported by the Ceph RBD.
(https://rexray.readthedocs.io/en/stable/user-guide/servers/libstorage/#preemption)
You could of course have a container that attaches the volume and then exports it via NFS on a dedicated swarm network; the client containers could then access it via NFS.
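A rough sketch of that workaround follows. The NFS server image (erichough/nfs-server), its NFS_EXPORT_0 variable, and the node address 10.0.0.10 are all assumptions; note also that a containerized NFS server needs extended kernel privileges, which swarm services only support on recent Docker versions:

version: '3.7'
services:
  nfs-server:
    image: erichough/nfs-server          # assumption: any containerized NFS server would do
    environment:
      NFS_EXPORT_0: "/export *(ro,no_subtree_check)"
    volumes:
      - test-volume:/export              # the rexray volume stays attached to one host
    ports:
      - target: 2049
        published: 2049
        mode: host                       # expose NFS on the node's own address
volumes:
  test-volume:
    driver: rexray
  # what the backup container mounts instead of the rexray volume:
  test-volume-nfs:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.10,ro,nfsvers=4"   # assumption: address of the node running nfs-server
      device: ":/export"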

Serving assets using nginx, kubernetes and docker

There is a need to serve assets such as images/video/blobs that are going to be used in a website and a set of mobile apps. I am thinking of running the following setup:
Run nginx in a Docker container to serve assets.
Run a sidecar container with a custom app which will pull these assets from a remote location and put them into 'local storage'. Nginx gets the assets from this local storage, and the custom app keeps it up to date.
To run this setup I need to make sure that the pods that run these two containers have local storage which is accessible from both containers. To achieve this, I am thinking of restricting these pods to a set of nodes in the Kubernetes cluster and provisioning local persistent volumes on these nodes. Does this make sense?
To achieve this, I am thinking of restricting these pods to a set of nodes in the Kubernetes cluster and provisioning local persistent volumes on these nodes.
Why go for a PersistentVolume when the sidecar container can pull the assets at any time from the remote location? Create a volume with an EmptyDirVolumeSource and mount it in both containers in the Pod, so that the sidecar container has write permission on the volume and the main container has read permission.
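A minimal sketch of such a Pod (image names and paths are placeholders): the sidecar writes the fetched assets into an emptyDir volume, and nginx mounts the same volume read-only:

apiVersion: v1
kind: Pod
metadata:
  name: asset-server
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: assets
          mountPath: /usr/share/nginx/html
          readOnly: true                 # nginx only reads the assets
    - name: asset-sync
      image: example/asset-sync:latest   # placeholder for the custom sidecar app
      volumeMounts:
        - name: assets
          mountPath: /assets             # the sidecar writes fetched assets here
  volumes:
    - name: assets
      emptyDir: {}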
From the description of your issue, it looks like distributed file systems might be what you are looking for.
For example, CephFS and Glusterfs are supported in Kubernetes (Volumes, PersistentVolumes) and have a good set of capabilities, like concurrent access (both) and PVC expansion (Glusterfs):
cephfs
A cephfs volume allows an existing CephFS volume to be mounted into your Pod. Unlike emptyDir, which is erased when a Pod is
removed, the contents of a cephfs volume are preserved and the volume
is merely unmounted. This means that a CephFS volume can be
pre-populated with data, and that data can be “handed off” between
Pods. CephFS can be mounted by multiple writers simultaneously.
Important: You must have your own Ceph server running with the share
exported before you can use it.
See the CephFS example for more details.
glusterfs
A glusterfs volume allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod.
Unlike emptyDir, which is erased when a Pod is removed, the contents
of a glusterfs volume are preserved and the volume is merely
unmounted. This means that a glusterfs volume can be pre-populated
with data, and that data can be “handed off” between Pods. GlusterFS
can be mounted by multiple writers simultaneously.
Important: You must have your own GlusterFS installation running
before you can use it.
See the GlusterFS example for more details.
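For reference, the in-tree cephfs volume looks roughly like this in a Pod spec (monitor address, user, and Secret name are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: cephfs-demo
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: cephfs
          mountPath: /mnt/cephfs
  volumes:
    - name: cephfs
      cephfs:
        monitors:
          - 10.16.154.78:6789    # placeholder Ceph monitor address
        user: admin              # placeholder CephX user
        secretRef:
          name: ceph-secret      # placeholder Secret holding the user's key
        readOnly: false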
For more information about these topics check out the following links:
Using Existing Ceph Cluster for Kubernetes Persistent Storage
GlusterFS Native Storage Service for Kubernetes
TWO DAYS OF PAIN OR HOW I DEPLOYED GLUSTERFS CLUSTER TO KUBERNETES

Does rexray support multi host volume mounts?

I want to share a volume across multiple containers in Docker swarm.
I need all the containers to have R/W access to this volume at any point in time.
If not rexray, is there any other Docker volume plugin that would enable me to do the same thing?
The rexray documentation doesn't state this clearly.
REX-Ray has to use some backend storage driver. This is more about the storage, which has to support multiple read/write connections to the same volume. If you truly need multiple read/write, some options include:
REX-Ray with AWS EFS driver. EFS supports multiple NFS r/w connections.
https://portworx.com, which will replicate data between nodes.
REX-Ray with any custom NFS storage.
Maybe a custom storage solution with drivers from Docker Store.
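For instance, with the rexray/efs plugin installed on every node, a stack could point all replicas at one EFS-backed volume (a sketch; plugin installation and AWS credentials setup are assumed to be done beforehand):

version: '3'
services:
  app:
    image: alpine
    command: tail -f /dev/null
    deploy:
      replicas: 3            # all replicas share the same EFS filesystem
    volumes:
      - shared-data:/data
volumes:
  shared-data:
    driver: rexray/efs       # assumption: the plugin name as installed on the nodes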
