Docker volumes vs NFS

I would like to know if it makes sense to use a redundant NFS/GFS share for web content instead of using Docker volumes?
I'm trying to build an HA Docker environment with the least amount of additional tooling. I would like to stick to 3 servers, each a Docker Swarm node.
Currently I'm looking into storage: an NFS/GFS filesystem cluster would require additional tooling for a small environment (100 GB max storage). I would like to only use natively supported Docker configurations, so I would prefer to use volumes and share those across containers. However, those volumes are, as far as I know, not synchronized to other swarm nodes by default, so if the swarm node that hosts the data volume goes down, it will be unavailable to every container across the swarm.

A few things, that together, should answer your question:
Volumes use a driver, and the default driver for docker run and Swarm services is the built-in "local" driver, which only supports file paths that are mounted on that host. For shared storage with Swarm services, you'll want a 3rd-party plugin driver, like REX-Ray. An official list is here: store.docker.com
What you want to look for in a volume driver is one that's "swarm aware" and will re-attach volumes to the new task that is created when the old Swarm service task is killed or updated. Tools like REX-Ray are almost like a "persistent data orchestrator" that ensures volumes are attached to the proper node where they are needed.
I'm not sure what web content you're talking about, but if it's code or templates, it should be built into the image. If you're talking about user-uploaded content that needs to be backed up, then yes, a volume sounds like the right way.
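A rough sketch of what that looks like in a stack file (the image, mount path and the rexray/ebs plugin name are placeholders; any swarm-aware plugin already installed on every node would do):
services:
  web:
    image: nginx:alpine                        # stand-in image for the example
    volumes:
      - uploads:/usr/share/nginx/html/uploads
volumes:
  uploads:
    driver: rexray/ebs                         # plugin driver instead of the default "local"
With a definition like this, the plugin (rather than the local driver) decides where the volume data lives and re-attaches it to whichever node the task lands on.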

Related

Is it practical to mount a block storage to multiple VPS running Docker Swarm for shared storage?

Looking at multiple options to implement shared storage for a Docker Swarm, I can see most of them require a special Docker plugin:
SSHFS
CephFS
GlusterFS
S3
and others
... but one thing that is not mentioned anywhere is simply mounting a typical block storage volume to all VPS nodes running the Docker Swarm. Is this option impractical, and is that why it is not mentioned on the Internet? Am I missing something?
My idea is as follows:
Create a typical Block Storage (like e.g. one offered by DigitalOcean or Vultr).
Mount it to your VPS filesystem.
Mount a folder from that Block Storage as a volume in the Docker container / Docker worker using the "local" driver.
This sounds like the simplest and most obvious approach to me. Why are people using more complicated setups like SSHFS, CephFS, etc.? And most importantly, is the implementation I described viable, and if so, what are the drawbacks of doing it this way?
The principal advantage of using a volume plugin over mounted storage comes down to the ability to create storage volumes dynamically, and associate them with namespaces.
i.e. with Docker managing storage for a volume via a "volumes: data:" directive in a compose file, a separate volume will be created for each named stack that is deployed.
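For example (a sketch; the service name, image and mount path are arbitrary), a stack file like this gets its own data volume per deployed stack:
services:
  app:
    image: myapp:latest          # placeholder image
    volumes:
      - data:/var/lib/app
volumes:
  data:                          # becomes <stackname>_data for each stack you deploy
Deploying it twice, once as docker stack deploy -c stack.yml blue and once as green, yields two independent volumes, blue_data and green_data.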
Using the local driver and mounts, you, the swarm admin, now need to ensure that no two stacks are trying to use /mnt/data.
Once you pass that hurdle, some platforms limit the number of hosts a block storage volume can be mounted onto.
There's also the security angle to consider: with the volume mapped like that, a compromise of any service on any host can potentially expose all your data to an attacker, whereas a volume plugin will expose only the data mounted into that container.
All that said, Docker Swarm is awesome and the current plugin space is lacking; if mounting block storage is what it takes to get a workable storage solution, I say do it. Hopefully the CSI support will be ready before year end, however.

Share volume in docker swarm for many nodes

I'm facing a big challenge: trying to run my app on 2 VPSes in Docker Swarm. Containers that use volumes should share the volume between nodes.
The solutions I'm considering are:
Use the GlusterFS plugin and mount the volume on every node using NFS. NFS creates a single point of failure, so when something goes wrong my data is gone. (That doesn't look good; maybe I'm wrong.)
Use Azure Storage and store data as blobs (Azure Data Lake Storage Gen2). But my main problem is: how can I connect to Azure Storage using docker-compose.yaml? I would have to declare the volume in every service that uses it and declare the volume in the volumes section, and I have no idea how to do that.
The Docker documentation about it is gone. It should be here: https://docs.docker.com/docker-for-azure/persistent-data-volumes/.
Another option is to use https://hub.docker.com/r/docker4x/cloudstor/tags?page=1&ordering=last_updated but the last update was 2 years ago, so it's probably not supported anymore.
Do I have any other options, and which way of sharing a volume between nodes is the best solution?
There are a number of ways of dealing with creating persistent volumes in docker swarm, none of them particularly satisfactory:
First up, a simple way is to use NFS, GlusterFS, iSCSI, or VMware to multi-mount the same SAN storage volume onto each Docker Swarm node. Services just mount volumes as /mnt/volumes/my-sql-workload.
On the one hand it's really simple; on the other hand there is literally no access control, and you can easily accidentally point services at each other's data.
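A sketch of that first approach in a stack file (the /mnt/volumes/my-sql-workload path is the one mentioned above; the MySQL image and container path are assumptions for the example):
services:
  mysql:
    image: mysql:8
    volumes:
      # the same SAN/NFS path is pre-mounted on every swarm node
      - /mnt/volumes/my-sql-workload:/var/lib/mysql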
Next, commercial Docker volume plugins for SANs. If you are lucky and possess a Pure Storage, NetApp or other such SAN array, some of them still offer Docker volume plugins; Trident, for example, if you have a NetApp.
Third, if you are in the cloud, the legacy Swarm offerings on Azure and AWS included a built-in "cloudstor" volume driver, but you need to dig really deep to find it in their legacy offerings.
Fourth, there are a number of open-source or free volume plugins that will mount volumes from NFS, GlusterFS or other sources. But most are abandoned or very quiet. The most active I know of is marcelo-ochoa/docker-volume-plugins.
I wasn't particularly happy with how those plugins mounted pre-existing volumes but made operations like docker volume create hard, so I made my own. But really:
Swarm Cluster Volume Support with CSI Plugins is hopefully going to drop in 2021¹, which should be a solid answer to all the problems above.
¹ It's now 2022 and the next version of Docker has not yet gone live with CSI support. Still we wait.
In my opinion, a good solution could be to create a GlusterFS cluster, configure a single volume, and mount it on every Docker Swarm node (e.g. at /mnt/swarm-storage).
Then, for every Container that needs persistent storage, bind-mount a subdirectory of the GlusterFS volume inside the container.
Example:
services:
  my-container:
    ...
    volumes:
      - type: bind
        source: /mnt/swarm-storage/my-container
        target: /a/path/inside/the/container
This way, every node shares the same storage, so a given container can be scheduled on any cluster node without distinction.
You don't need any Docker plugin for a particular storage driver, because the distributed storage is transparent to the Swarm cluster.
Lastly, GlusterFS is a distributed filesystem, designed to have no single point of failure, and you can cluster it across as many nodes as you like (unlike NFS).

Are Docker volumes machine-specific?

I'm new to Docker Swarm. As I understand it, Docker Swarm allows you to abstract away the cluster, meaning you don't care on which hardware a container is deployed.
On the other hand, the standard way to handle a database in Docker is to write data outside the Docker container (to avoid copy-on-write behaviour). That's achieved by mounting a volume and writing DB-related data to it. The important thing here: are volumes machine-specific? Are Docker & Docker Swarm clever enough to mount a volume on the machine where it's needed?
Example:
I have 3 machines and 3 microservices/containers. All of them are deployed through Docker Swarm. Only one microservice/container must connect to a database, so I need to mount a volume on only one machine. But on which one?
Databases and similar stateful applications are still a hard thing to deal with when it comes to Docker swarm and other orchestration frameworks. Ideally, containers should be able to run on any node in the swarm, but the problem comes when you need to persist data beyond the container's lifecycle.
Mounting a volume is the Docker way to persist data; however, this ties the container to a specific node, as volumes are created on specific nodes. There are many projects that try to solve this problem and provide some sort of distributed storage.
There was a project called Flocker that dealt with the above problem (it's no longer maintained). There is also a newer project called REX-Ray.
Are Docker & Docker Swarm clever enough to mount a volume on the machine where it's needed?
By default, no. Docker swarm will choose one of the nodes and deploy the container on it. However, you can work around this problem:
First, you need to define a named volume in your stack file/compose file under the service definition.
Second, you need to use node placement constraints to restrict where the database container should run.
If you do not use a distributed storage tool, then when it comes to databases and similar stateful containers that need volumes, you need to restrict the container to specific nodes.
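A minimal sketch combining both points (the image, mount path and the db-node-1 hostname constraint are assumptions for the example):
services:
  db:
    image: postgres:14
    volumes:
      - db-data:/var/lib/postgresql/data
    deploy:
      placement:
        constraints:
          - node.hostname == db-node-1   # hypothetical node; pins the task to where the volume lives
volumes:
  db-data: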

What are proven options for using ZFS remotely as a volume backend for docker?

So with the introduction of volumes we no longer use data-only containers! Nice. But right now I have this nice home-grown ZFS appliance and I want to use it as a backend for my Docker volumes (of course Docker is running on other hosts).
I can export ZFS over NFS relatively easily; what are proven (i.e. battle-tested) options for using NFS as a volume backend for Docker?
A google search shows me the following possibilities:
using Flocker, I could use the flocker-agent-thingie on the ZFS appliance. However, with Flocker being scrapped, I am concerned...
using the local volume backend and simply mounting the NFS export on the Docker host -> does not scale, but might do the job (see the sketch below this question)
using a specialized volume plugin like https://github.com/ContainX/docker-volume-netshare to utilize NFS
something similar from Rancher: https://github.com/rancher/convoy
or going big and using Kubernetes and NFS as persistent storage: https://kubernetes.io/docs/concepts/storage/volumes/#nfs
I have pretty extensive single-host Docker knowledge - which option is a stable one? Performance is not that important to me (the use case is a dockerized ownCloud/Nextcloud stack; throughput is limited by the internet connection).
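To illustrate the "local volume backend on top of NFS" option, a sketch of the compose-level wiring; the appliance address, export path and mount options are placeholders for the ZFS/NFS appliance:
services:
  nextcloud:
    image: nextcloud:stable
    volumes:
      - nextcloud-data:/var/www/html/data
volumes:
  nextcloud-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.0.2.10,rw,nfsvers=4"    # hypothetical appliance IP and mount options
      device: ":/tank/docker/nextcloud"    # hypothetical ZFS dataset exported over NFS
The volume is still "local" as far as Docker is concerned, but the data lives on the NFS export, so any host with network access to the appliance can mount it.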

Docker Swarm NFS volumes

I am playing with Docker 1.12 Swarm with orchestration! But there is one issue I am not able to find an answer to:
If you're running a service like nginx or redis, you don't worry about data persistence.
But if you're running a service like a database, you need data persistence. If something happens to your Docker instance, the manager will shuttle the container to one of the available nodes, but by default Docker doesn't move data volumes to other nodes. To address this problem we can use third-party plugins like Flocker (https://github.com/ClusterHQ/flocker) or REX-Ray (https://github.com/emccode/rexray).
But the problem with this is that when one node fails you lose the data; Flocker or REX-Ray does not deal with this.
We can solve this if we use something like NFS: I mount the same volume across my nodes, so we don't have to move the data between two nodes. If one of the nodes fails, the replacement needs to remember the Docker mount location. Can we do this? If so, can we achieve it with Docker Swarm's built-in orchestration?
With REX-Ray, the data is stored outside the Docker Swarm nodes (in Amazon S3, OpenStack Cinder, ...). So if you lose a node, you won't lose your persistent data. If your scheduler starts a new container that needs the data on another host, it will retrieve the external volume using the REX-Ray plugin and you're good to go.
Note: your external provider needs to allow you to perform forced detach of the volume from the now unavailable old nodes.
