Share docker images between hosts with NFS - docker

I'm building a Mesosphere infrastructure on AWS instances with 3 master servers (running ZooKeeper, mesos-master, Marathon, and HAProxy) and N slaves (running mesos-slave and Docker).
If I run the same container on different slaves, Marathon downloads the same image onto each slave. I would like to share a single NFS export (say, on master1) and mount it on every slave in order to have a single storage location for the images.
I'm using Ubuntu on the EC2 instances, so the default storage driver is devicemapper. I set up the slaves to mount /var/lib/docker/devicemapper and /var/lib/docker/graph, but it ends up with this error: "stale NFS file handle".
What I would like to understand is:
Is there a way to do it using a different storage driver?
In any case, does the docker daemon do some locking on the files in this directory?
Is my approach wrong, possibly leading to concurrency access issues?

Instead of using NFS to expose the backing file system, I think it would be easier to set up docker-registry (with a volume on master1, so the data is persisted there) and have the other nodes pull images via the Docker protocol, e.g. docker pull master1:5000/image:latest
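A minimal sketch of that setup (the image name and registry-data volume name are placeholders; a plain-HTTP registry may also require adding master1:5000 to each daemon's insecure-registries list):

    # On master1: run a registry, persisting image data in a named volume
    docker run -d -p 5000:5000 --restart=always \
      -v registry-data:/var/lib/registry \
      --name registry registry:2

    # On any node: tag and push an image, then pull it through the registry
    docker tag myimage:latest master1:5000/myimage:latest
    docker push master1:5000/myimage:latest
    docker pull master1:5000/myimage:latest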

Related

Mapping Docker Volumes in a Cluster/Docker-Swarm

I am running Docker Swarm with 3 master and 3 worker nodes.
On this swarm, I have an Elasticsearch container which reads data from multiple log files and then writes the data into a directory. Later it reads data from this directory and shows me the logs in a UI.
Now the problem is that I am running only 1 instance of this Elasticsearch container, and if for some reason it goes down, Docker Swarm starts it on another machine. Since I have 6 machines, I have created the particular directory on all of them, but whenever I start the Docker stack the ES container starts reading/writing the directory on whichever machine it is running.
Is there a way that we can
Force docker swarm to run a container on a particular machine
or
Map volume to shared/network drive
Both are possible.
Force docker swarm to run a container on a particular machine
Add a --constraint flag when executing docker service create, as in the sketch below.
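For example (the node name worker-1 and the image tag are assumptions for illustration):

    # Label the node that holds the data directory, then pin the service to it
    docker node update --label-add es=true worker-1
    docker service create --name elasticsearch \
      --constraint 'node.labels.es == true' \
      elasticsearch:7.17.0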
Map volume to shared/network drive
Use a Docker volume with a driver that supports writing files to an external storage system like NFS or Amazon S3, as in the sketch below.
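For instance, the built-in local driver can mount an NFS export directly (the server address 10.0.0.10 and export path /exports/es-data are placeholders):

    # Create an NFS-backed volume with the built-in local driver
    docker volume create --driver local \
      --opt type=nfs \
      --opt o=addr=10.0.0.10,rw \
      --opt device=:/exports/es-data \
      es-data

    # Mount it into a container
    docker run -d -v es-data:/usr/share/elasticsearch/data elasticsearch:7.17.0

Note that with the local driver the volume definition must exist on every node where the task can be scheduled.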

Docker volumes vs nfs

I would like to know if it is logical to use a redundant NFS/GFS share for webcontent instead of using docker volumes?
I'm trying to build a HA docker environment with the least amount of additional tooling. I would like to stick to 3 servers, each a docker swarm node.
Currently I'm looking into storage: an NFS/GFS filesystem cluster would require additional tooling for a small environment (100 GB max storage). I would like to use only natively supported Docker configurations, so I would prefer to use volumes and share those across containers. However, those volumes are, as far as I know, not synchronized to other swarm nodes by default, so if the swarm node that hosts the data volume goes down, it will be unavailable to every container across the swarm.
A few things, that together, should answer your question:
Volumes use a driver, and the default driver in docker run and Swarm services is the built-in "local" driver, which only supports file paths that are mounted on that host. For shared storage with Swarm services, you'll want a third-party plugin driver, like REX-Ray. An official list is here: store.docker.com
What you want to look for in a volume driver is one that's "Docker Swarm aware" and will re-attach volumes to a new task if the old Swarm service task is killed or updated. Tools like REX-Ray are almost like a "persistent data orchestrator" that ensures volumes are attached to the proper node where they are needed; a sketch follows.
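A rough sketch of installing such a plugin and using it from a service (the AWS keys and the webdata volume name are placeholders, and the settings shown are only the minimum):

    # Install the REX-Ray EBS plugin on each node
    docker plugin install rexray/ebs \
      EBS_ACCESSKEY=AKIA... EBS_SECRETKEY=...

    # Create a service whose volume follows the task across nodes
    docker service create --name web \
      --mount type=volume,source=webdata,target=/data,volume-driver=rexray/ebs \
      nginx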
I'm not sure what web content you're talking about, but if it's code or templates, it should be built into the image. If you're talking about user-uploaded content that needs to be backed up, then yes, a volume sounds like the right way.

Persistent storage solution for Docker on AWS EC2

I want to deploy a Node-RED server on my AWS EC2 cluster. I got the Docker image up and running without problems. Node-RED stores the user flows in a folder named /data. Now, when the container is destroyed, the data is lost. I have read about several solutions where you can mount a local folder into a volume. What is a good way to deal with persistent data on AWS EC2?
My initial thoughts are to use an S3 volume or to mount a volume in the task definition.
It is possible to use a volume driver plugin with Docker that supports mapping EBS volumes.
Flocker was one of the first volume managers; it supports EBS and has evolved to support a lot of different back ends.
Cloudstor is Docker's volume plugin (it comes with Docker for AWS/Azure).
Blocker is an EBS-only volume driver.
S3 doesn't work well for all file system operations as you can't update a section of an object, so updating 1 byte of a file means you have to write the entire object again. It's also not immediately consistent so a write then read might give you odd/old results.
An EBS volume can only be attached to one instance, which means you can only run your Docker containers on one EC2 instance. Assuming you would like to scale your solution in the future with many containers running in an ECS cluster, you need to look into EFS. It's a shared file system from AWS. The only issue is the performance degradation of EFS compared to EBS.
The easiest way (and the most common approach) is to run your container with the -v /path/to/host_folder:/path/to/container_folder option, so the container will refer to the host folder and the data will persist after the container is restarted or recreated; see the sketch below. The Docker documentation has detailed information about the volume system.
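For Node-RED specifically, a minimal sketch (the host path is an assumption):

    # Bind-mount a host directory onto Node-RED's /data so flows survive recreation
    docker run -d -p 1880:1880 \
      -v /home/ec2-user/node-red-data:/data \
      --name mynodered nodered/node-red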
I would use AWS EFS. It is like a NAS in that you can have it mounted to multiple instances at the same time.
If you are using ECS for your docker host the following guide may be helpful http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_efs.html
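A sketch of mounting EFS on each instance and handing it to the container (the file system ID, region, and paths are placeholders):

    # Mount the EFS file system on the EC2 instance
    sudo mkdir -p /mnt/efs
    sudo mount -t nfs4 -o nfsvers=4.1 \
      fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs

    # Bind-mount a subdirectory into the Node-RED container
    docker run -d -p 1880:1880 -v /mnt/efs/node-red:/data nodered/node-red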

Persistent storage for WebDAV server on docker swarm?

How can I achieve a persistent storage for a WebDAV server running on several/any swarm nodes?
It's part of a docker-compose app running on my own vSphere infrastructure.
I was thinking about mounting an external NFS share from inside the containers (at the OS level, not as Docker volumes), but then how would that be better than having WebDAV outside the swarm cluster?
I can think of 2 options:
GlusterFS
This option is vSphere-independent. You can create replicated bricks and store your volumes on them, exposing the same volume to multiple Docker hosts. In case of node failure the container gets restarted on another node and keeps its persistent storage. You can also mount the persistent data in multiple containers; a sketch follows.
There is one catch: the same disk space is consumed on multiple nodes.
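A minimal GlusterFS sketch (node names, brick paths, and the WebDAV image are placeholders):

    # Create and start a 3-way replicated volume across the nodes
    gluster volume create dav-data replica 3 \
      node1:/bricks/dav node2:/bricks/dav node3:/bricks/dav
    gluster volume start dav-data

    # On each docker host: mount the volume, then bind-mount it into the container
    mount -t glusterfs node1:/dav-data /mnt/dav-data
    docker run -d -v /mnt/dav-data:/var/lib/dav some-webdav-image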
Docker-Volume-vSphere
This option requires vSphere hosts. You can create Docker volumes on VMFS datastores; they will be shared between Docker hosts (virtual machines). In case of failure the container restarts on another node and has its persistent data available, and multiple containers can share a single volume; see the sketch below.
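With the vSphere Docker Volume Service plugin installed, usage looks roughly like this (the volume name, size, and WebDAV image are assumptions):

    # Create a volume backed by a VMDK on the shared datastore
    docker volume create --driver=vsphere --name=dav-data -o size=10gb

    # Use it from any docker host attached to the same datastore
    docker run -d -v dav-data:/var/lib/dav some-webdav-image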

How to mount OpenStack container in a Docker container

I am new to OpenStack. I saw there is a feature called containers in OpenStack. I think those containers are not the same thing as Docker containers; I have understood that OpenStack containers are just file storage (volumes?). Right or wrong?
But is there a way to mount an OpenStack container in a Docker container?
I want to have a Docker container which contains only "system files" (/bin, /usr, apache, mysql) and to put all my configuration files and PHP files in an OpenStack container.
"Containers" in OpenStack belong to the Object Storage service (OpenStack Swift). Swift is the OpenStack counterpart of AWS S3: what S3 calls a "bucket", Swift calls a "container". The sketch below illustrates this.
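A quick illustration with the python-swiftclient CLI (the container and file names are placeholders):

    # A Swift "container" is an object-storage bucket, analogous to an S3 bucket
    swift post my-container              # create the container
    swift upload my-container index.php  # store an object in it
    swift list my-container              # list its objects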
Nevertheless, OpenStack includes Docker-container support through two projects:
Nova-Docker (a compute driver in Nova, working at the "hypervisor" level, but with Docker instead of KVM/QEMU/libvirt)
OpenStack Magnum: a Docker-container orchestration solution for OpenStack.
You can read more about these projects at:
Magnum: https://wiki.openstack.org/wiki/Magnum
Nova-docker: https://github.com/openstack/nova-docker
