What is the difference between Ceph RBD and iSCSI in the context of a Cinder volume (of type ceph)?

I am trying to get a volume driver to work on a Docker Swarm that uses Nova instances and Cinder volumes, making persistent storage available to the swarm services.
I can only create Cinder volumes of type ceph. I am modifying the driver and trying to get it to work for me, but I cannot get the TargetPortal, TargetIQN, etc. needed to do the iSCSI attach. Should I be using RBD instead of iSCSI, since the volume type is ceph?

I got this working using Rexray.
iSCSI is not supported for Cinder in my case. That is why these details are not returned by the OpenStack endpoint during attach for the cinder-docker-driver.

Related

Binding of volume to Docker Container via Kubernetes

I'm new to the area, sorry if my question sounds dumb.
What I'm looking for: I have a pod of containers, where one of the containers (Alpine-based) should read/write from/to a customer-provided file. I don't want to limit the customer on how the file is provided (or at least I want to support the most common ways).
The file might sometimes be huge (not sure if that requirement makes any difference).
The more flexibility here, the better.
From my initial search I found there are multiple ways to bind a volume/directory to a Docker container:
- Docker bind mount - sharing a directory between the host and the container (nice to have)
- Adding a Docker volume to the pod (must have)
- Mounting an AWS S3 bucket into the container (must have)
Any other ways of supplying file access to the container? Let's say from a remote machine via SFTP access?
But main question - is it all possible to configure via Kubernetes?
Ideally in the same yaml file that starts the containers?
Any hints/examples are very welcome!
It surely is possible!
Just as there are volume mounts for a Docker container, there are volume mounts in Kubernetes as well.
This is achieved using a PersistentVolumeClaim (PVC): storage resources whose lifecycle is independent of the Pod, used to store the data in the volume mount.
Understand more about the concept here: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
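As a minimal sketch of what that can look like in a single YAML file (the claim name `customer-data`, the size, and the mount path are illustrative assumptions, not something from the question):

```yaml
# Hypothetical example: a PVC plus a Pod that mounts it.
# The claim name, size, and mount path are made up for illustration.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: customer-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: file-reader
spec:
  containers:
    - name: app
      image: alpine
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: customer-data
```

Whether the underlying storage is a cloud disk, NFS, or something S3-backed is decided by the PV/StorageClass that satisfies the claim, so the Pod spec itself stays the same.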

What are the differences between the concepts of storage driver and volume driver in Docker?

I'm studying Docker and I couldn't understand very well the difference between what storage drivers are used for and what volume drivers are used for.
My theory (please correct me if I'm wrong) is that storage drivers manage the way Docker handles the writable layer underneath, and can use overlay, overlay2, aufs, zfs, btrfs, and so on.
Volume drivers, however, deal with volumes underneath: a volume can be local (in which case I think it will use a storage driver) or remote (like EBS).
Am I right?
Docker uses storage drivers to store image layers and to store data in the writable layer of a container. Docker uses volume drivers for write-intensive data, data that must persist beyond the container's lifespan, and data that must be shared between containers. So, I understand storage drivers are used with image and container layers, while volume drivers are used for persistent container application data. See the first three paragraphs of this Docker documentation: https://docs.docker.com/storage/storagedriver/
Docker Engine volumes enable deployments to be integrated with external storage systems such as Amazon EBS, and enable data volumes to persist beyond the lifetime of a single Docker host. Here the term 'local' in the context of a Docker volume driver means the volumes (say, esdata1 and esdata2) are created on the same Docker host where you run your container. By using other volume plugins, e.g. --driver=flocker, you are able to create a volume on an external host and mount it on the local host at, say, /data-path.
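To illustrate the distinction, a hedged Compose sketch (the service, image, and volume names are assumptions for illustration; flocker is just one example of an external plugin) might declare one local and one plugin-backed volume:

```yaml
# Illustrative only: 'esdata1' uses the default local driver,
# while 'esdata2' would be provisioned by an external volume plugin.
version: '3'
services:
  es:
    image: elasticsearch:7.17.9
    volumes:
      - esdata1:/usr/share/elasticsearch/data
volumes:
  esdata1:
    driver: local      # created on the Docker host running the container
  esdata2:
    driver: flocker    # hypothetical external, plugin-backed volume
```

In both cases the container just sees a mounted directory; the storage driver is only involved in the image layers and the container's writable layer, not in the volume contents.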

Mount rexray/ceph volume in multiple containers on Docker swarm

What I have done
I have built a Docker Swarm cluster where I am running containers that have persistent data. To allow the container to move to another host in the event of failure I need resilient shared storage across the swarm. After looking into the various options I have implemented the following:
Installed a Ceph storage cluster across all nodes of the Swarm and created a RADOS Block Device (RBD).
http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
Installed Rexray on each node and configured it to use the RBD created above. https://rexray.readthedocs.io/en/latest/user-guide/storage-providers/ceph/
Deployed a Docker stack that mounts a volume using the rexray driver, e.g.
```yaml
version: '3'
services:
  test-volume:
    image: ubuntu
    volumes:
      - test-volume:/test
volumes:
  test-volume:
    driver: rexray
```
This solution is working, in that I can deploy a stack, simulate a failure on the node where it is running, and then observe the stack restarted on another node with no loss of persistent data.
However, I cannot mount a rexray volume in more than one container. My reason for doing so is to use a short-lived "backup container" that simply tars the volume to a snapshot backup while the main container is still running.
My Question
Can I mount my rexray volumes into a second container?
The second container only needs read access so it can tar the volume to a snapshot backup while keeping the first container running.
Unfortunately the answer is no: in this use case, rexray volumes cannot be mounted into a second container. Some information below will hopefully assist anyone heading down a similar path:
Rexray does not support multiple mounts:
Today REX-Ray was designed to actually ensure safety among many hosts that could potentially have access to the same host. This means that it forcefully restricts a single volume to only be available to one host at a time. (https://github.com/rexray/rexray/issues/343#issuecomment-198568291)
But Rexray does support a feature called pre-emption where:
..if a second host does request the volume that he is able to forcefully detach it from the original host first, and then bring it to himself. This would simulate a power-off operation of a host attached to a volume where all bits in memory on original host that have not been flushed down is lost. This would support the Swarm use case with a host that fails, and a container trying to be re-scheduled.
(https://github.com/rexray/rexray/issues/343#issuecomment-198568291)
However, pre-emption is not supported by the Ceph RBD.
(https://rexray.readthedocs.io/en/stable/user-guide/servers/libstorage/#preemption)
You could, of course, have a container that attaches the volume and then exports it via NFS on a dedicated swarm network; the client containers could then access it via NFS.
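As a hedged sketch of the client side of that workaround (the NFS server address and export path are made-up assumptions; the export itself would be served by the container holding the rexray volume), the backup container could mount the export read-only through the local driver's NFS support:

```yaml
# Illustrative only: a backup service mounting an NFS export
# (served from the container that owns the rexray volume) read-only.
version: '3'
services:
  backup:
    image: ubuntu
    command: tar czf /snapshots/test-volume.tar.gz -C /test .
    volumes:
      - nfs-test-volume:/test:ro
volumes:
  nfs-test-volume:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=10.0.0.10,ro,nolock,soft"
      device: ":/exports/test-volume"
```

This keeps the single-attach guarantee on the RBD side intact, since only one container ever attaches the rexray volume directly.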

same data volume attached to multiple container even in different host

I'm able to bind a Docker volume to a specific container in a swarm thanks to Flocker, but now I want to have multiple replicas of my server (for load balancing), and so I'm searching for something that binds the same data volume to multiple replicas of a Docker service.
In the Flocker documentation I found the following:
Can more than one container access the same volume? Flocker works by
creating a 1 to 1 relationship of a volume and a container. This means
you can have multiple volumes for one container, and those volumes
will always follow that container.
Flocker attaches volumes to the individual agent host (docker host)
and this can only be one host at a time because Flocker attaches
Block-based storage. Nodes on different hosts cannot access the same
volume, because it can only be attached to one node at a time.
If multiple containers on the same host want to use the same volume,
they can, but be careful because multiple containers accessing the
same storage volume can cause corruption.
Can I attach a single volume to multiple hosts? Not currently, support
from multi-attach backends like GCE in Read Only mode, or NFS-like
backends like storage, or distributed filesystems like GlusterFS would
need to be integrated. Flocker focuses mainly on block-storage uses
cases that attach a volume to a single node at a time.
So I think it is not possible to do what I want with Flocker.
I could use a different orchestrator (Kubernetes) if that could help me, even though I have no experience with it.
I would rather not use NAS/NFS or any distributed filesystem.
Any suggestions?
Thanks in advance.
In Kubernetes, you can mount a volume into different Pods at the same time if the technology that backs the volume supports shared access.
As mentioned in Kubernetes Persistent Volumes:
Access Modes A PersistentVolume can be mounted on a host in any way
supported by the resource provider. As shown below, providers will
have different capabilities and each PV’s access modes are set to the
specific modes supported by that particular volume. For example, NFS
can support multiple read/write clients, but a specific NFS PV might
be exported on the server as read-only. Each PV gets its own set of
access modes describing that specific PV’s capabilities.
The access modes are:
ReadWriteOnce – the volume can be mounted as read-write by a single node
ReadOnlyMany – the volume can be mounted read-only by many nodes
ReadWriteMany – the volume can be mounted as read-write by many nodes
Types of volumes that support ReadOnlyMany mode:
AzureFile
CephFS
FC
FlexVolume
GCEPersistentDisk
Glusterfs
iSCSI
Quobyte
NFS
RBD
ScaleIO
Types of volumes that support ReadWriteMany mode:
AzureFile
CephFS
Glusterfs
Quobyte
RBD
PortworxVolume
VsphereVolume (works when Pods are collocated)
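For example, a hedged sketch of a ReadWriteMany PersistentVolume backed by NFS (the server address, export path, and resource names are illustrative assumptions), together with a claim that multiple Pods could then mount simultaneously:

```yaml
# Illustrative only: an NFS-backed PV with ReadWriteMany access,
# and a PVC that several Pods can share at the same time.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10
    path: /exports/shared
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
```

Each replica's Pod template would then reference `shared-pvc` in its volumes section, which sidesteps Flocker's one-volume-one-node restriction.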

Docker and Cinder, is it possible in openstack?

Is it possible to mount Cinder volumes in Docker containers on OpenStack?
And if it is, is there a way to encrypt data leaving the container to the Cinder volume?
I was thinking of mounting the volume as a loopback device and encrypting the data as it is flushed to disk. Is this possible?
Kind regards
It is not currently possible to mount Cinder volumes inside a Docker container in OpenStack.
A fundamental problem is that Docker is filesystem-based, rather than block-device-based. Any block device -- like a Cinder volume -- would need to be formatted with a filesystem and mounted prior to starting the container. While it might be technically feasible, the necessary support for this does not yet exist.
The Manila project may be a better solution for adding storage to containers, but I haven't looked into that, and I don't know whether (a) the project works at all yet and (b) it works with nova-docker.
If you're not using the nova-docker driver but are instead using the Heat plugin for Docker, you can mount host volumes in a container, similar to docker run -v ..., but making this work seamlessly across multiple nodes in a multi-tenant setting may be difficult or impossible.
