How to delete volumes in a swarm cluster? - docker

I have a swarm cluster with one manager and one worker node. When I create a swarm service I create it with a mount type, mount source and mount target. It creates a volume with the same name on both the manager and the worker, starts the container, and my service is up.
When I remove the service, the volume created along with it is not deleted; that is still fine.
The problem I am facing is that when I delete the volume through the same endpoint, only the volume on the swarm manager is deleted; the volume that was created on the worker node when the service was created still exists.
I want the manager to delete all the volumes that were created along with the swarm service. Is there a way?

After quite a lot of analysis, here is the theory.
When you instruct swarm to create a service with a volume, swarm only takes care of creating the service inside the cluster, i.e. across the multiple nodes. When you pass the volume details it does create the volume as well, but when the service is removed it fails to check the worker nodes for the existence of that volume. This is a bug in Docker, and I have raised an issue for it.
As of now there is no other way than manually removing the volume from the worker nodes after removing the swarm service.
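A minimal cleanup sketch for that workaround (the service and volume names are placeholders):
# On the manager: remove the service (its volumes are left behind)
docker service rm my-service

# Still on the manager: remove the volume that was created there
docker volume rm my-service-data

# On each worker node that ran a task of the service:
docker volume ls --filter name=my-service-data
docker volume rm my-service-data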

As far as I know, a volume is only created on nodes where a container is created. Is it possible that your service fails to start on one node, ends up on the other, and somehow swarm doesn't clean up? If that's the case, write an issue on GitHub.
Update (from comments):
According to the docker service create documentation:
A named volume is a mechanism for decoupling persistent data needed by your container from the image used to create the container and from the host machine. Named volumes are created and managed by Docker, and a named volume persists even when no container is currently using it. Data in named volumes can be shared between a container and the host machine, as well as between multiple containers. Docker uses a volume driver to create, manage, and mount volumes. You can back up or restore volumes using Docker commands.
So if you are using named volumes, the better question would be: why are they removed on the manager, and why were they ever created there?

Mount rexray/ceph volume in multiple containers on Docker swarm

What I have done
I have built a Docker Swarm cluster where I am running containers that have persistent data. To allow the container to move to another host in the event of failure I need resilient shared storage across the swarm. After looking into the various options I have implemented the following:
Installed a Ceph Storage Cluster across all nodes of the Swarm and created a RADOS Block Device (RBD):
http://docs.ceph.com/docs/master/start/quick-ceph-deploy/
Installed Rexray on each node and configured it to use the RBD created above: https://rexray.readthedocs.io/en/latest/user-guide/storage-providers/ceph/
Deployed a Docker stack that mounts a volume using the rexray driver, e.g.:
version: '3'
services:
  test-volume:
    image: ubuntu
    volumes:
      - test-volume:/test
volumes:
  test-volume:
    driver: rexray
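To try this out, the stack can be deployed and inspected as follows (the stack name and compose file name are assumptions):
# Deploy the stack and see which node the task lands on
docker stack deploy -c docker-compose.yml test
docker stack ps test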
This solution works in that I can deploy a stack, simulate a failure on the node that is running it, and then observe the stack restarted on another node with no loss of persistent data.
However, I cannot mount a rexray volume in more than one container. My reason for doing so is to use a short-lived "backup container" that simply tars the volume to a snapshot backup while the first container is still running.
My Question
Can I mount my rexray volumes into a second container?
The second container only needs read access so it can tar the volume to a snapshot backup while keeping the first container running.
Unfortunately the answer is no: in this use case rexray volumes cannot be mounted into a second container. Some information below will hopefully assist anyone heading down a similar path:
Rexray does not support multiple mounts:
Today REX-Ray was designed to actually ensure safety among many hosts that could potentially have access to the same host. This means that it forcefully restricts a single volume to only be available to one host at a time. (https://github.com/rexray/rexray/issues/343#issuecomment-198568291)
But Rexray does support a feature called pre-emption where:
..if a second host does request the volume that he is able to forcefully detach it from the original host first, and then bring it to himself. This would simulate a power-off operation of a host attached to a volume where all bits in memory on original host that have not been flushed down is lost. This would support the Swarm use case with a host that fails, and a container trying to be re-scheduled.
(https://github.com/rexray/rexray/issues/343#issuecomment-198568291)
However, pre-emption is not supported by the Ceph RBD.
(https://rexray.readthedocs.io/en/stable/user-guide/servers/libstorage/#preemption)
You could of course have a container that attaches the volume and then exports it via NFS on a dedicated swarm network; the client containers could then access it via NFS.
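A rough sketch of the client side of that idea, assuming a hypothetical exporter service reachable at 10.0.1.5 on the overlay network and exporting /data:
# NFS-backed volume created with the built-in local driver
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=10.0.1.5 \
  --opt device=:/data \
  backup-share

# Short-lived backup container with read-only access, streaming a tarball to the host
docker run --rm -v backup-share:/backup:ro ubuntu tar czf - -C /backup . > snapshot.tar.gz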

Mapping Docker Volumes in a Cluster/Docker-Swarm

I am running Docker Swarm with 3 manager and 3 worker nodes.
On this swarm, I have an Elasticsearch container which reads data from multiple log files and then writes the data into a directory. Later it reads data from this directory and shows me the logs on a UI.
Now the problem is that I am running only 1 instance of this Elasticsearch container, and if for some reason it goes down, Docker swarm starts it on another machine. Since I have 6 machines I have created the particular directory on all of them, but whenever I start the Docker stack the ES container reads/writes the directory only on the machine where it is currently running.
Is there a way that we can
Force docker swarm to run a container on a particular machine
or
Map volume to shared/network drive
Both are possible.
Force docker swarm to run a container on a particular machine
Add a --constraint flag when executing docker service create.
Map volume to shared/network drive
Use a docker volume with a driver that supports writing files to an external storage system such as NFS or Amazon S3.
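Hedged sketches of both approaches (node name, image tag, data path and the NFS server address are assumptions):
# Pin the service to one node with a placement constraint
docker service create \
  --name es \
  --constraint 'node.hostname == worker-1' \
  elasticsearch:7.17.0

# Or back the data directory with an NFS share so any node can run it
docker service create \
  --name es \
  --mount 'type=volume,source=es-data,target=/usr/share/elasticsearch/data,volume-driver=local,volume-opt=type=nfs,volume-opt=o=addr=192.168.1.100,volume-opt=device=:/exports/es-data' \
  elasticsearch:7.17.0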

peer container replaced after docker swarm service update

When I use 'docker service update' on a peer container in my docker swarm, the peer gets replaced by a new one.
The new one has almost the same name, e.g.
old: peer1.org1-223d2d23d23, new: peer1.org1-345245634ff4
It has access to all files like channel.tx, genesis.block and mychannel.block in the peer/channel-artifacts directory. But the new peer has not joined the channel and no chaincode is installed on it.
I can't join the channel or install chaincode, because for peer1.org1 that has supposedly already been done. However, if I fetch the oldest channel block, I can. But this leaves things in a strange state, I think.
So my question is
How can a peer service in docker swarm still be part of the stack/swarm after an service update or downtime without it being a completely new peer container?
When you upgrade a container in Docker, Docker Swarm or Kubernetes, you are essentially replacing the container (i.e. there is really no concept of an in-place upgrade of the container) with another one which receives the same settings, environment, etc.
When running Docker in standalone mode and using volumes, this is fairly transparent as the new container is deployed on the same host as the prior container and therefore will mount the same volumes, etc.
It seems like you are already mounting some type of volume from shared storage / filesystem in order to access channel.tx, etc.
What you also need to do is actually make sure that you use volumes for the persistent storage used / required by the peer (and orderer, etc for that matter).
On the peer side, the two key attributes in core.yaml are:
peer.fileSystemPath - this defaults to /var/hyperledger/production and is where the ledger, installed chaincodes, etc are kept. The corresponding environment variable is CORE_PEER_FILESYSTEMPATH.
peer.mspConfigPath - where the local MSP info is stored. The corresponding environment variable is CORE_PEER_MSPCONFIGPATH.
You will want to mount those as volumes and given you are using Swarm those volumes will need to be available on a shared storage which is available on all of your Swarm hosts.
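A hedged sketch of what that could look like with docker service create (the service name, network, volume names and MSP target path are assumptions; check your own core.yaml and image defaults):
docker service create \
  --name peer1-org1 \
  --network fabric-net \
  --env CORE_PEER_FILESYSTEMPATH=/var/hyperledger/production \
  --env CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp \
  --mount type=volume,source=peer1-org1-production,target=/var/hyperledger/production \
  --mount type=volume,source=peer1-org1-msp,target=/etc/hyperledger/msp \
  hyperledger/fabric-peer:1.4

# peer1-org1-production and peer1-org1-msp should be created with a volume
# driver (NFS, rexray, ...) that is reachable from every swarm node.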

Are Docker Volumes machine-specific

I'm new to Docker Swarm. As I understand it, Docker Swarm lets you abstract away the clustering, meaning you don't care on which hardware a container is deployed.
On the other hand, the standard way to handle a database in Docker is to write the data outside the Docker container (to avoid copy-on-write behaviour). That is achieved by mounting a volume and writing db-related data to it. The important thing here: are volumes machine-specific? Are Docker & Docker Swarm clever enough to mount a volume on the machine where it's needed?
Example:
I have 3 machines and 3 microservices/containers. All of them are deployed through Docker Swarm. Only one microservice/container must connect to a database, so I need to mount a volume on only one machine. But on which?
Databases and similar stateful applications are still a hard thing to deal with when it comes to Docker swarm and other orchestration frameworks. Ideally, containers should be able to run on any node in the swarm, but the problem comes when you need to persist data beyond the container's lifecycle.
Mounting a volume is the Docker way to persist data, however this ties the container with a specific node as volumes are created on the specific nodes. There are many projects that try to solve this problem and provide some sort of distributed storage.
There was a project called Flocker that dealt with the above problem (it's no longer maintained). There is also a newer project called REXRAY.
Are Docker & Docker Swarm clever enough to mount a Volume on the machine it's needed?
By default, no. Docker swarm will choose one of the nodes and deploy the container on it. However, you can work around this problem:
First, you need to define a named volume in your Stackfile/Composefile under the service definition.
Second, you need to use node Placement Constraints to restrict where the database container should run.
If you do not use a distributed storage tool, then when it comes to databases and similar stateful containers that need volumes, you need to restrict the container to a specific node.
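A hedged sketch of both steps using the CLI rather than a stack file (node name, label, image and volume name are assumptions; the database password is a placeholder):
# Label the node that should hold the database volume
docker node update --label-add db=true node-3

# Constrain the database service to that node and give it a named volume
docker service create \
  --name postgres \
  --constraint 'node.labels.db == true' \
  --env POSTGRES_PASSWORD=changeme \
  --mount type=volume,source=pgdata,target=/var/lib/postgresql/data \
  postgres:13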

Docker swarm NFS volumes

I am playing with Docker 1.12 swarm mode with orchestration! But there is one issue I am not able to find an answer to:
If you're running a service like nginx or redis you don't worry about data persistence.
But if you're running a service like a database we need data persistence, so that if something happens to your docker instance the manager reschedules it onto one of the available nodes. By default docker doesn't move data volumes to other nodes. To address this problem we can use third-party plugins like Flocker (https://github.com/ClusterHQ/flocker) or Rexray (https://github.com/emccode/rexray).
But the problem with this is: when one node fails you lose the data. Flocker or Rexray does not deal with this.
We can solve this if we use something like NFS: if I mount the same volume across my nodes, we don't have to move the data between two nodes. If one of the nodes fails, the replacement needs to remember the docker mount location; can we do this? If so, can we achieve this with docker Swarm's built-in orchestration?
With Rexray, the data is stored outside the docker swarm nodes (in Amazon S3, OpenStack Cinder, ...). So if you lose a node, you won't lose your persistent data. If your scheduler starts a new container that needs the data on another host, it will retrieve the external volume using the rexray plugin and you're good to go.
Note: your external provider needs to allow you to perform a forced detach of the volume from the now-unavailable old node.
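A hedged sketch with the REX-Ray managed plugin for Amazon EBS (the plugin name and settings follow the REX-Ray docs pattern for EBS; substitute the plugin that matches your provider, and the credentials/password are placeholders):
# Install the plugin on every swarm node
docker plugin install --grant-all-permissions rexray/ebs \
  EBS_ACCESSKEY=<access-key> EBS_SECRETKEY=<secret-key>

# Create the external volume and run a service that uses it
docker volume create --driver rexray/ebs db-data
docker service create \
  --name db \
  --env MYSQL_ROOT_PASSWORD=changeme \
  --mount type=volume,source=db-data,target=/var/lib/mysql,volume-driver=rexray/ebs \
  mysql:5.7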
